00:00:00.001 Started by upstream project "autotest-spdk-v24.09-vs-dpdk-v22.11" build number 228 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3730 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.133 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.134 The recommended git tool is: git 00:00:00.134 using credential 00000000-0000-0000-0000-000000000002 00:00:00.136 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.185 Fetching changes from the remote Git repository 00:00:00.187 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.235 Using shallow fetch with depth 1 00:00:00.235 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.235 > git --version # timeout=10 00:00:00.268 > git --version # 'git version 2.39.2' 00:00:00.268 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.293 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.293 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:07.302 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:07.313 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:07.325 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:07.325 > git config core.sparsecheckout # timeout=10 00:00:07.335 > git read-tree -mu HEAD # timeout=10 00:00:07.350 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:07.367 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:07.367 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:07.470 [Pipeline] Start of Pipeline 00:00:07.486 [Pipeline] library 00:00:07.488 Loading library shm_lib@master 00:00:07.488 Library shm_lib@master is cached. Copying from home. 00:00:07.500 [Pipeline] node 00:00:07.512 Running on VM-host-SM38 in /var/jenkins/workspace/nvme-vg-autotest 00:00:07.513 [Pipeline] { 00:00:07.521 [Pipeline] catchError 00:00:07.523 [Pipeline] { 00:00:07.532 [Pipeline] wrap 00:00:07.539 [Pipeline] { 00:00:07.547 [Pipeline] stage 00:00:07.549 [Pipeline] { (Prologue) 00:00:07.566 [Pipeline] echo 00:00:07.568 Node: VM-host-SM38 00:00:07.572 [Pipeline] cleanWs 00:00:07.581 [WS-CLEANUP] Deleting project workspace... 00:00:07.581 [WS-CLEANUP] Deferred wipeout is used... 
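For reference, the checkout sequence above reduces to a shallow, single-revision fetch. A minimal standalone sketch (repository URL and revision copied from the log; the git plugin's credential, proxy, and timeout handling omitted):

    #!/usr/bin/env bash
    # Sketch of the shallow checkout the Jenkins git plugin performs above.
    # URL and revision come from the log; auth/proxy handling is left out.
    set -euo pipefail
    REPO=https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
    REV=db4637e8b949f278f369ec13f70585206ccd9507
    git init jbp && cd jbp
    # --depth=1 fetches only the tip of master, keeping the transfer small.
    git fetch --force --tags --progress --depth=1 -- "$REPO" refs/heads/master
    git checkout -f "$REV"   # REV is the fetched tip (FETCH_HEAD) in this run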
00:00:07.589 [WS-CLEANUP] done 00:00:07.827 [Pipeline] setCustomBuildProperty 00:00:07.897 [Pipeline] httpRequest 00:00:08.273 [Pipeline] echo 00:00:08.275 Sorcerer 10.211.164.20 is alive 00:00:08.282 [Pipeline] retry 00:00:08.284 [Pipeline] { 00:00:08.297 [Pipeline] httpRequest 00:00:08.302 HttpMethod: GET 00:00:08.303 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:08.304 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:08.305 Response Code: HTTP/1.1 200 OK 00:00:08.306 Success: Status code 200 is in the accepted range: 200,404 00:00:08.306 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:09.531 [Pipeline] } 00:00:09.549 [Pipeline] // retry 00:00:09.555 [Pipeline] sh 00:00:09.841 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:09.858 [Pipeline] httpRequest 00:00:10.294 [Pipeline] echo 00:00:10.295 Sorcerer 10.211.164.20 is alive 00:00:10.305 [Pipeline] retry 00:00:10.307 [Pipeline] { 00:00:10.323 [Pipeline] httpRequest 00:00:10.330 HttpMethod: GET 00:00:10.331 URL: http://10.211.164.20/packages/spdk_b18e1bd6297ec2f89ab275de3193457af1c946df.tar.gz 00:00:10.331 Sending request to url: http://10.211.164.20/packages/spdk_b18e1bd6297ec2f89ab275de3193457af1c946df.tar.gz 00:00:10.350 Response Code: HTTP/1.1 200 OK 00:00:10.350 Success: Status code 200 is in the accepted range: 200,404 00:00:10.351 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/spdk_b18e1bd6297ec2f89ab275de3193457af1c946df.tar.gz 00:00:53.533 [Pipeline] } 00:00:53.552 [Pipeline] // retry 00:00:53.560 [Pipeline] sh 00:00:53.850 + tar --no-same-owner -xf spdk_b18e1bd6297ec2f89ab275de3193457af1c946df.tar.gz 00:00:56.414 [Pipeline] sh 00:00:56.699 + git -C spdk log --oneline -n5 00:00:56.699 b18e1bd62 version: v24.09.1-pre 00:00:56.699 19524ad45 version: v24.09 00:00:56.699 9756b40a3 dpdk: update submodule to include alarm_cancel fix 00:00:56.699 a808500d2 test/nvmf: disable nvmf_shutdown_tc4 on e810 00:00:56.699 3024272c6 bdev/nvme: take nvme_ctrlr.mutex when setting keys 00:00:56.720 [Pipeline] withCredentials 00:00:56.732 > git --version # timeout=10 00:00:56.746 > git --version # 'git version 2.39.2' 00:00:56.766 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:00:56.768 [Pipeline] { 00:00:56.778 [Pipeline] retry 00:00:56.780 [Pipeline] { 00:00:56.795 [Pipeline] sh 00:00:57.081 + git ls-remote http://dpdk.org/git/dpdk-stable v22.11.4 00:00:57.357 [Pipeline] } 00:00:57.375 [Pipeline] // retry 00:00:57.380 [Pipeline] } 00:00:57.396 [Pipeline] // withCredentials 00:00:57.405 [Pipeline] httpRequest 00:00:57.880 [Pipeline] echo 00:00:57.887 Sorcerer 10.211.164.20 is alive 00:00:57.906 [Pipeline] retry 00:00:57.908 [Pipeline] { 00:00:57.922 [Pipeline] httpRequest 00:00:57.926 HttpMethod: GET 00:00:57.927 URL: http://10.211.164.20/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:00:57.927 Sending request to url: http://10.211.164.20/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:00:57.932 Response Code: HTTP/1.1 200 OK 00:00:57.932 Success: Status code 200 is in the accepted range: 200,404 00:00:57.932 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:36.677 [Pipeline] } 00:01:36.695 [Pipeline] // retry 00:01:36.703 [Pipeline] sh 00:01:36.990 + tar --no-same-owner -xf 
dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:38.922 [Pipeline] sh 00:01:39.210 + git -C dpdk log --oneline -n5 00:01:39.210 caf0f5d395 version: 22.11.4 00:01:39.210 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:01:39.210 dc9c799c7d vhost: fix missing spinlock unlock 00:01:39.210 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:01:39.210 6ef77f2a5e net/gve: fix RX buffer size alignment 00:01:39.229 [Pipeline] writeFile 00:01:39.245 [Pipeline] sh 00:01:39.532 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:39.544 [Pipeline] sh 00:01:39.827 + cat autorun-spdk.conf 00:01:39.827 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:39.827 SPDK_TEST_NVME=1 00:01:39.827 SPDK_TEST_FTL=1 00:01:39.827 SPDK_TEST_ISAL=1 00:01:39.827 SPDK_RUN_ASAN=1 00:01:39.827 SPDK_RUN_UBSAN=1 00:01:39.827 SPDK_TEST_XNVME=1 00:01:39.827 SPDK_TEST_NVME_FDP=1 00:01:39.827 SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:39.827 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:39.827 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:39.835 RUN_NIGHTLY=1 00:01:39.837 [Pipeline] } 00:01:39.851 [Pipeline] // stage 00:01:39.865 [Pipeline] stage 00:01:39.867 [Pipeline] { (Run VM) 00:01:39.880 [Pipeline] sh 00:01:40.165 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:40.165 + echo 'Start stage prepare_nvme.sh' 00:01:40.165 Start stage prepare_nvme.sh 00:01:40.165 + [[ -n 2 ]] 00:01:40.165 + disk_prefix=ex2 00:01:40.165 + [[ -n /var/jenkins/workspace/nvme-vg-autotest ]] 00:01:40.165 + [[ -e /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf ]] 00:01:40.165 + source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf 00:01:40.165 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:40.165 ++ SPDK_TEST_NVME=1 00:01:40.165 ++ SPDK_TEST_FTL=1 00:01:40.165 ++ SPDK_TEST_ISAL=1 00:01:40.165 ++ SPDK_RUN_ASAN=1 00:01:40.165 ++ SPDK_RUN_UBSAN=1 00:01:40.165 ++ SPDK_TEST_XNVME=1 00:01:40.165 ++ SPDK_TEST_NVME_FDP=1 00:01:40.165 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:40.165 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:40.165 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:40.165 ++ RUN_NIGHTLY=1 00:01:40.165 + cd /var/jenkins/workspace/nvme-vg-autotest 00:01:40.165 + nvme_files=() 00:01:40.165 + declare -A nvme_files 00:01:40.165 + backend_dir=/var/lib/libvirt/images/backends 00:01:40.165 + nvme_files['nvme.img']=5G 00:01:40.165 + nvme_files['nvme-cmb.img']=5G 00:01:40.165 + nvme_files['nvme-multi0.img']=4G 00:01:40.165 + nvme_files['nvme-multi1.img']=4G 00:01:40.165 + nvme_files['nvme-multi2.img']=4G 00:01:40.165 + nvme_files['nvme-openstack.img']=8G 00:01:40.165 + nvme_files['nvme-zns.img']=5G 00:01:40.165 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:40.165 + (( SPDK_TEST_FTL == 1 )) 00:01:40.165 + nvme_files["nvme-ftl.img"]=6G 00:01:40.165 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:40.165 + nvme_files["nvme-fdp.img"]=1G 00:01:40.165 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:01:40.165 + for nvme in "${!nvme_files[@]}" 00:01:40.165 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi2.img -s 4G 00:01:40.165 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:40.165 + for nvme in "${!nvme_files[@]}" 00:01:40.165 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-ftl.img -s 6G 00:01:40.165 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc 00:01:40.165 + for nvme in "${!nvme_files[@]}" 00:01:40.165 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-cmb.img -s 5G 00:01:40.165 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:40.165 + for nvme in "${!nvme_files[@]}" 00:01:40.165 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-openstack.img -s 8G 00:01:40.165 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:40.165 + for nvme in "${!nvme_files[@]}" 00:01:40.165 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-zns.img -s 5G 00:01:40.165 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:40.165 + for nvme in "${!nvme_files[@]}" 00:01:40.165 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi1.img -s 4G 00:01:40.165 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:40.426 + for nvme in "${!nvme_files[@]}" 00:01:40.426 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi0.img -s 4G 00:01:40.426 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:40.426 + for nvme in "${!nvme_files[@]}" 00:01:40.426 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-fdp.img -s 1G 00:01:40.426 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc 00:01:40.426 + for nvme in "${!nvme_files[@]}" 00:01:40.426 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme.img -s 5G 00:01:40.426 Formatting '/var/lib/libvirt/images/backends/ex2-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:40.426 ++ sudo grep -rl ex2-nvme.img /etc/libvirt/qemu 00:01:40.426 + echo 'End stage prepare_nvme.sh' 00:01:40.426 End stage prepare_nvme.sh 00:01:40.439 [Pipeline] sh 00:01:40.725 + DISTRO=fedora39 00:01:40.725 + CPUS=10 00:01:40.725 + RAM=12288 00:01:40.725 + jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:40.725 Setup: -n 10 -s 12288 -x -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex2-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex2-nvme.img -b /var/lib/libvirt/images/backends/ex2-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex2-nvme-multi1.img:/var/lib/libvirt/images/backends/ex2-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex2-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39 00:01:40.725 00:01:40.725 
DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant 00:01:40.725 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk 00:01:40.725 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest 00:01:40.725 HELP=0 00:01:40.725 DRY_RUN=0 00:01:40.725 NVME_FILE=/var/lib/libvirt/images/backends/ex2-nvme-ftl.img,/var/lib/libvirt/images/backends/ex2-nvme.img,/var/lib/libvirt/images/backends/ex2-nvme-multi0.img,/var/lib/libvirt/images/backends/ex2-nvme-fdp.img, 00:01:40.725 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme, 00:01:40.725 NVME_AUTO_CREATE=0 00:01:40.725 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex2-nvme-multi1.img:/var/lib/libvirt/images/backends/ex2-nvme-multi2.img,, 00:01:40.725 NVME_CMB=,,,, 00:01:40.725 NVME_PMR=,,,, 00:01:40.725 NVME_ZNS=,,,, 00:01:40.725 NVME_MS=true,,,, 00:01:40.725 NVME_FDP=,,,on, 00:01:40.725 SPDK_VAGRANT_DISTRO=fedora39 00:01:40.725 SPDK_VAGRANT_VMCPU=10 00:01:40.725 SPDK_VAGRANT_VMRAM=12288 00:01:40.725 SPDK_VAGRANT_PROVIDER=libvirt 00:01:40.725 SPDK_VAGRANT_HTTP_PROXY= 00:01:40.725 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:40.725 SPDK_OPENSTACK_NETWORK=0 00:01:40.725 VAGRANT_PACKAGE_BOX=0 00:01:40.725 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:01:40.725 FORCE_DISTRO=true 00:01:40.725 VAGRANT_BOX_VERSION= 00:01:40.725 EXTRA_VAGRANTFILES= 00:01:40.725 NIC_MODEL=e1000 00:01:40.725 00:01:40.725 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt' 00:01:40.725 /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest 00:01:43.274 Bringing machine 'default' up with 'libvirt' provider... 00:01:43.536 ==> default: Creating image (snapshot of base box volume). 00:01:43.797 ==> default: Creating domain with the following settings... 
00:01:43.797 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1734345223_9717114cbc1f8420ee4d 00:01:43.797 ==> default: -- Domain type: kvm 00:01:43.797 ==> default: -- Cpus: 10 00:01:43.797 ==> default: -- Feature: acpi 00:01:43.797 ==> default: -- Feature: apic 00:01:43.797 ==> default: -- Feature: pae 00:01:43.797 ==> default: -- Memory: 12288M 00:01:43.797 ==> default: -- Memory Backing: hugepages: 00:01:43.797 ==> default: -- Management MAC: 00:01:43.797 ==> default: -- Loader: 00:01:43.797 ==> default: -- Nvram: 00:01:43.797 ==> default: -- Base box: spdk/fedora39 00:01:43.797 ==> default: -- Storage pool: default 00:01:43.797 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1734345223_9717114cbc1f8420ee4d.img (20G) 00:01:43.797 ==> default: -- Volume Cache: default 00:01:43.797 ==> default: -- Kernel: 00:01:43.797 ==> default: -- Initrd: 00:01:43.797 ==> default: -- Graphics Type: vnc 00:01:43.797 ==> default: -- Graphics Port: -1 00:01:43.797 ==> default: -- Graphics IP: 127.0.0.1 00:01:43.797 ==> default: -- Graphics Password: Not defined 00:01:43.797 ==> default: -- Video Type: cirrus 00:01:43.797 ==> default: -- Video VRAM: 9216 00:01:43.797 ==> default: -- Sound Type: 00:01:43.797 ==> default: -- Keymap: en-us 00:01:43.797 ==> default: -- TPM Path: 00:01:43.797 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:43.797 ==> default: -- Command line args: 00:01:43.797 ==> default: -> value=-device, 00:01:43.797 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:01:43.797 ==> default: -> value=-drive, 00:01:43.797 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-ftl.img,if=none,id=nvme-0-drive0, 00:01:43.797 ==> default: -> value=-device, 00:01:43.797 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64, 00:01:43.797 ==> default: -> value=-device, 00:01:43.797 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:01:43.797 ==> default: -> value=-drive, 00:01:43.797 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme.img,if=none,id=nvme-1-drive0, 00:01:43.797 ==> default: -> value=-device, 00:01:43.797 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:43.797 ==> default: -> value=-device, 00:01:43.797 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12, 00:01:43.797 ==> default: -> value=-drive, 00:01:43.797 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi0.img,if=none,id=nvme-2-drive0, 00:01:43.797 ==> default: -> value=-device, 00:01:43.797 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:43.797 ==> default: -> value=-drive, 00:01:43.797 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi1.img,if=none,id=nvme-2-drive1, 00:01:43.797 ==> default: -> value=-device, 00:01:43.797 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:43.797 ==> default: -> value=-drive, 00:01:43.797 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi2.img,if=none,id=nvme-2-drive2, 00:01:43.797 ==> default: -> value=-device, 00:01:43.797 ==> default: -> 
value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:43.797 ==> default: -> value=-device, 00:01:43.797 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8, 00:01:43.797 ==> default: -> value=-device, 00:01:43.797 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3, 00:01:43.797 ==> default: -> value=-drive, 00:01:43.797 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-fdp.img,if=none,id=nvme-3-drive0, 00:01:43.797 ==> default: -> value=-device, 00:01:43.797 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:43.797 ==> default: Creating shared folders metadata... 00:01:43.797 ==> default: Starting domain. 00:01:45.714 ==> default: Waiting for domain to get an IP address... 00:02:03.935 ==> default: Waiting for SSH to become available... 00:02:03.935 ==> default: Configuring and enabling network interfaces... 00:02:07.245 default: SSH address: 192.168.121.22:22 00:02:07.245 default: SSH username: vagrant 00:02:07.245 default: SSH auth method: private key 00:02:09.795 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:02:17.939 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/dpdk/ => /home/vagrant/spdk_repo/dpdk 00:02:23.232 ==> default: Mounting SSHFS shared folder... 00:02:25.179 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:02:25.179 ==> default: Checking Mount.. 00:02:26.122 ==> default: Folder Successfully Mounted! 00:02:26.122 00:02:26.122 SUCCESS! 00:02:26.122 00:02:26.122 cd to /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:02:26.122 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:02:26.122 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:02:26.122 00:02:26.133 [Pipeline] } 00:02:26.149 [Pipeline] // stage 00:02:26.159 [Pipeline] dir 00:02:26.159 Running in /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt 00:02:26.161 [Pipeline] { 00:02:26.173 [Pipeline] catchError 00:02:26.175 [Pipeline] { 00:02:26.187 [Pipeline] sh 00:02:26.471 + vagrant ssh-config --host vagrant 00:02:26.471 + sed -ne '/^Host/,$p' 00:02:26.471 + tee ssh_conf 00:02:29.020 Host vagrant 00:02:29.020 HostName 192.168.121.22 00:02:29.020 User vagrant 00:02:29.020 Port 22 00:02:29.020 UserKnownHostsFile /dev/null 00:02:29.020 StrictHostKeyChecking no 00:02:29.020 PasswordAuthentication no 00:02:29.020 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:02:29.020 IdentitiesOnly yes 00:02:29.020 LogLevel FATAL 00:02:29.020 ForwardAgent yes 00:02:29.020 ForwardX11 yes 00:02:29.020 00:02:29.035 [Pipeline] withEnv 00:02:29.037 [Pipeline] { 00:02:29.051 [Pipeline] sh 00:02:29.334 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant '#!/bin/bash 00:02:29.335 source /etc/os-release 00:02:29.335 [[ -e /image.version ]] && img=$(< /image.version) 00:02:29.335 # Minimal, systemd-like check. 
00:02:29.335 if [[ -e /.dockerenv ]]; then 00:02:29.335 # Clear garbage from the node'\''s name: 00:02:29.335 # agt-er_autotest_547-896 -> autotest_547-896 00:02:29.335 # $HOSTNAME is the actual container id 00:02:29.335 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:02:29.335 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:02:29.335 # We can assume this is a mount from a host where container is running, 00:02:29.335 # so fetch its hostname to easily identify the target swarm worker. 00:02:29.335 container="$(< /etc/hostname) ($agent)" 00:02:29.335 else 00:02:29.335 # Fallback 00:02:29.335 container=$agent 00:02:29.335 fi 00:02:29.335 fi 00:02:29.335 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:02:29.335 ' 00:02:29.610 [Pipeline] } 00:02:29.625 [Pipeline] // withEnv 00:02:29.634 [Pipeline] setCustomBuildProperty 00:02:29.647 [Pipeline] stage 00:02:29.649 [Pipeline] { (Tests) 00:02:29.666 [Pipeline] sh 00:02:29.950 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:02:30.227 [Pipeline] sh 00:02:30.513 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:02:30.789 [Pipeline] timeout 00:02:30.790 Timeout set to expire in 50 min 00:02:30.791 [Pipeline] { 00:02:30.805 [Pipeline] sh 00:02:31.090 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'git -C spdk_repo/spdk reset --hard' 00:02:31.702 HEAD is now at b18e1bd62 version: v24.09.1-pre 00:02:31.716 [Pipeline] sh 00:02:32.000 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'sudo chown vagrant:vagrant spdk_repo' 00:02:32.277 [Pipeline] sh 00:02:32.562 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:32.840 [Pipeline] sh 00:02:33.126 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo' 00:02:33.388 ++ readlink -f spdk_repo 00:02:33.388 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:33.388 + [[ -n /home/vagrant/spdk_repo ]] 00:02:33.388 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:33.388 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:33.388 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:33.388 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:02:33.388 + [[ -d /home/vagrant/spdk_repo/output ]] 00:02:33.388 + [[ nvme-vg-autotest == pkgdep-* ]] 00:02:33.388 + cd /home/vagrant/spdk_repo 00:02:33.388 + source /etc/os-release 00:02:33.388 ++ NAME='Fedora Linux' 00:02:33.388 ++ VERSION='39 (Cloud Edition)' 00:02:33.388 ++ ID=fedora 00:02:33.388 ++ VERSION_ID=39 00:02:33.388 ++ VERSION_CODENAME= 00:02:33.388 ++ PLATFORM_ID=platform:f39 00:02:33.388 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:02:33.388 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:33.388 ++ LOGO=fedora-logo-icon 00:02:33.388 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:02:33.388 ++ HOME_URL=https://fedoraproject.org/ 00:02:33.388 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:02:33.388 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:33.388 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:33.388 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:33.388 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:02:33.388 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:33.388 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:02:33.388 ++ SUPPORT_END=2024-11-12 00:02:33.388 ++ VARIANT='Cloud Edition' 00:02:33.388 ++ VARIANT_ID=cloud 00:02:33.388 + uname -a 00:02:33.388 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:02:33.388 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:33.649 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:02:33.910 Hugepages 00:02:33.910 node hugesize free / total 00:02:33.910 node0 1048576kB 0 / 0 00:02:33.910 node0 2048kB 0 / 0 00:02:33.910 00:02:33.910 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:33.911 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:02:33.911 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:02:33.911 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:02:33.911 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:02:33.911 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:02:34.172 + rm -f /tmp/spdk-ld-path 00:02:34.172 + source autorun-spdk.conf 00:02:34.172 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:34.172 ++ SPDK_TEST_NVME=1 00:02:34.172 ++ SPDK_TEST_FTL=1 00:02:34.172 ++ SPDK_TEST_ISAL=1 00:02:34.172 ++ SPDK_RUN_ASAN=1 00:02:34.172 ++ SPDK_RUN_UBSAN=1 00:02:34.172 ++ SPDK_TEST_XNVME=1 00:02:34.172 ++ SPDK_TEST_NVME_FDP=1 00:02:34.172 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:02:34.172 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:34.172 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:34.172 ++ RUN_NIGHTLY=1 00:02:34.172 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:34.172 + [[ -n '' ]] 00:02:34.172 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:02:34.172 + for M in /var/spdk/build-*-manifest.txt 00:02:34.172 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:02:34.172 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:34.172 + for M in /var/spdk/build-*-manifest.txt 00:02:34.172 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:34.172 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:34.172 + for M in /var/spdk/build-*-manifest.txt 00:02:34.172 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:34.172 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:34.172 ++ uname 00:02:34.172 + [[ Linux == 
\L\i\n\u\x ]] 00:02:34.172 + sudo dmesg -T 00:02:34.172 + sudo dmesg --clear 00:02:34.172 + dmesg_pid=5778 00:02:34.172 + [[ Fedora Linux == FreeBSD ]] 00:02:34.172 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:34.172 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:34.172 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:34.172 + [[ -x /usr/src/fio-static/fio ]] 00:02:34.172 + sudo dmesg -Tw 00:02:34.172 + export FIO_BIN=/usr/src/fio-static/fio 00:02:34.172 + FIO_BIN=/usr/src/fio-static/fio 00:02:34.172 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:34.172 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:34.172 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:34.172 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:34.172 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:34.172 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:34.172 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:34.172 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:34.172 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:34.172 Test configuration: 00:02:34.172 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:34.172 SPDK_TEST_NVME=1 00:02:34.172 SPDK_TEST_FTL=1 00:02:34.172 SPDK_TEST_ISAL=1 00:02:34.172 SPDK_RUN_ASAN=1 00:02:34.172 SPDK_RUN_UBSAN=1 00:02:34.172 SPDK_TEST_XNVME=1 00:02:34.172 SPDK_TEST_NVME_FDP=1 00:02:34.172 SPDK_TEST_NATIVE_DPDK=v22.11.4 00:02:34.172 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:34.172 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:34.172 RUN_NIGHTLY=1 10:34:34 -- common/autotest_common.sh@1680 -- $ [[ n == y ]] 00:02:34.172 10:34:34 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:34.172 10:34:34 -- scripts/common.sh@15 -- $ shopt -s extglob 00:02:34.172 10:34:34 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:34.172 10:34:34 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:34.172 10:34:34 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:34.172 10:34:34 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:34.172 10:34:34 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:34.172 10:34:34 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:34.172 10:34:34 -- paths/export.sh@5 -- $ export PATH 00:02:34.172 10:34:34 -- paths/export.sh@6 -- $ echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:34.172 10:34:34 -- common/autobuild_common.sh@478 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:34.172 10:34:34 -- common/autobuild_common.sh@479 -- $ date +%s 00:02:34.172 10:34:34 -- common/autobuild_common.sh@479 -- $ mktemp -dt spdk_1734345274.XXXXXX 00:02:34.172 10:34:34 -- common/autobuild_common.sh@479 -- $ SPDK_WORKSPACE=/tmp/spdk_1734345274.t9ju1B 00:02:34.172 10:34:34 -- common/autobuild_common.sh@481 -- $ [[ -n '' ]] 00:02:34.172 10:34:34 -- common/autobuild_common.sh@485 -- $ '[' -n v22.11.4 ']' 00:02:34.172 10:34:34 -- common/autobuild_common.sh@486 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:34.172 10:34:34 -- common/autobuild_common.sh@486 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:02:34.172 10:34:34 -- common/autobuild_common.sh@492 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:34.172 10:34:34 -- common/autobuild_common.sh@494 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:34.172 10:34:34 -- common/autobuild_common.sh@495 -- $ get_config_params 00:02:34.172 10:34:34 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:02:34.172 10:34:34 -- common/autotest_common.sh@10 -- $ set +x 00:02:34.433 10:34:34 -- common/autobuild_common.sh@495 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-xnvme' 00:02:34.433 10:34:34 -- common/autobuild_common.sh@497 -- $ start_monitor_resources 00:02:34.433 10:34:34 -- pm/common@17 -- $ local monitor 00:02:34.433 10:34:34 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:34.433 10:34:34 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:34.433 10:34:34 -- pm/common@25 -- $ sleep 1 00:02:34.433 10:34:34 -- pm/common@21 -- $ date +%s 00:02:34.433 10:34:34 -- pm/common@21 -- $ date +%s 00:02:34.433 10:34:34 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1734345274 00:02:34.433 10:34:34 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1734345274 00:02:34.433 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1734345274_collect-cpu-load.pm.log 00:02:34.433 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1734345274_collect-vmstat.pm.log 00:02:35.376 10:34:35 -- common/autobuild_common.sh@498 -- $ trap stop_monitor_resources EXIT 00:02:35.376 10:34:35 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:35.376 10:34:35 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:35.376 10:34:35 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:35.376 10:34:35 -- spdk/autobuild.sh@16 -- $ date -u 00:02:35.376 
Mon Dec 16 10:34:35 AM UTC 2024 00:02:35.376 10:34:35 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:35.376 v24.09-1-gb18e1bd62 00:02:35.376 10:34:35 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:02:35.376 10:34:35 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:02:35.376 10:34:35 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:02:35.376 10:34:35 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:02:35.376 10:34:35 -- common/autotest_common.sh@10 -- $ set +x 00:02:35.376 ************************************ 00:02:35.376 START TEST asan 00:02:35.376 ************************************ 00:02:35.376 using asan 00:02:35.376 10:34:35 asan -- common/autotest_common.sh@1125 -- $ echo 'using asan' 00:02:35.376 00:02:35.376 real 0m0.000s 00:02:35.376 user 0m0.000s 00:02:35.376 sys 0m0.000s 00:02:35.376 ************************************ 00:02:35.376 10:34:35 asan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:02:35.376 10:34:35 asan -- common/autotest_common.sh@10 -- $ set +x 00:02:35.376 END TEST asan 00:02:35.376 ************************************ 00:02:35.376 10:34:35 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:35.376 10:34:35 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:35.376 10:34:35 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:02:35.376 10:34:35 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:02:35.376 10:34:35 -- common/autotest_common.sh@10 -- $ set +x 00:02:35.376 ************************************ 00:02:35.376 START TEST ubsan 00:02:35.376 ************************************ 00:02:35.376 using ubsan 00:02:35.377 10:34:35 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan' 00:02:35.377 00:02:35.377 real 0m0.000s 00:02:35.377 user 0m0.000s 00:02:35.377 sys 0m0.000s 00:02:35.377 10:34:35 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:02:35.377 ************************************ 00:02:35.377 END TEST ubsan 00:02:35.377 ************************************ 00:02:35.377 10:34:35 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:35.377 10:34:35 -- spdk/autobuild.sh@27 -- $ '[' -n v22.11.4 ']' 00:02:35.377 10:34:35 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:02:35.377 10:34:35 -- common/autobuild_common.sh@442 -- $ run_test build_native_dpdk _build_native_dpdk 00:02:35.377 10:34:35 -- common/autotest_common.sh@1101 -- $ '[' 2 -le 1 ']' 00:02:35.377 10:34:35 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:02:35.377 10:34:35 -- common/autotest_common.sh@10 -- $ set +x 00:02:35.377 ************************************ 00:02:35.377 START TEST build_native_dpdk 00:02:35.377 ************************************ 00:02:35.377 10:34:35 build_native_dpdk -- common/autotest_common.sh@1125 -- $ _build_native_dpdk 00:02:35.377 10:34:35 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:02:35.377 10:34:35 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:02:35.377 10:34:35 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:02:35.377 10:34:35 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:02:35.377 10:34:35 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:02:35.377 10:34:35 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:02:35.377 10:34:35 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:02:35.377 10:34:35 build_native_dpdk -- 
common/autobuild_common.sh@61 -- $ export CC=gcc 00:02:35.377 10:34:35 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:02:35.377 10:34:35 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:02:35.377 10:34:35 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:02:35.377 10:34:35 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:02:35.377 10:34:35 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:02:35.377 10:34:35 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:02:35.377 10:34:35 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/home/vagrant/spdk_repo/dpdk/build 00:02:35.377 10:34:35 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:35.377 10:34:35 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk 00:02:35.377 10:34:35 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! -d /home/vagrant/spdk_repo/dpdk ]] 00:02:35.377 10:34:35 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/home/vagrant/spdk_repo/spdk 00:02:35.377 10:34:35 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /home/vagrant/spdk_repo/dpdk log --oneline -n 5 00:02:35.377 caf0f5d395 version: 22.11.4 00:02:35.377 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:02:35.377 dc9c799c7d vhost: fix missing spinlock unlock 00:02:35.377 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:02:35.377 6ef77f2a5e net/gve: fix RX buffer size alignment 00:02:35.377 10:34:35 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:02:35.377 10:34:35 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:02:35.377 10:34:35 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=22.11.4 00:02:35.377 10:34:35 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:02:35.377 10:34:35 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:02:35.377 10:34:35 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:02:35.377 10:34:35 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:02:35.377 10:34:35 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:02:35.377 10:34:35 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:02:35.377 10:34:35 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:02:35.377 10:34:35 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:02:35.377 10:34:35 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:35.377 10:34:35 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:35.377 10:34:35 build_native_dpdk -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:02:35.377 10:34:35 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /home/vagrant/spdk_repo/dpdk 00:02:35.377 10:34:35 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s 00:02:35.377 10:34:35 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:02:35.377 10:34:35 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 22.11.4 21.11.0 00:02:35.377 10:34:35 build_native_dpdk 
-- scripts/common.sh@373 -- $ cmp_versions 22.11.4 '<' 21.11.0 00:02:35.377 10:34:35 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:35.377 10:34:35 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:35.377 10:34:35 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:35.377 10:34:35 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:35.377 10:34:35 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:35.377 10:34:35 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:35.377 10:34:35 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:02:35.377 10:34:35 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:02:35.377 10:34:35 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:35.377 10:34:35 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:35.377 10:34:35 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:35.377 10:34:35 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:02:35.377 10:34:35 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:35.377 10:34:35 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:35.377 10:34:35 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 22 00:02:35.377 10:34:35 build_native_dpdk -- scripts/common.sh@353 -- $ local d=22 00:02:35.377 10:34:35 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:02:35.377 10:34:35 build_native_dpdk -- scripts/common.sh@355 -- $ echo 22 00:02:35.377 10:34:35 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=22 00:02:35.377 10:34:35 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 21 00:02:35.377 10:34:35 build_native_dpdk -- scripts/common.sh@353 -- $ local d=21 00:02:35.377 10:34:35 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:02:35.377 10:34:35 build_native_dpdk -- scripts/common.sh@355 -- $ echo 21 00:02:35.377 10:34:35 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=21 00:02:35.377 10:34:35 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:35.377 10:34:35 build_native_dpdk -- scripts/common.sh@367 -- $ return 1 00:02:35.377 10:34:35 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1 00:02:35.638 patching file config/rte_config.h 00:02:35.638 Hunk #1 succeeded at 60 (offset 1 line). 
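The xtrace above steps through the cmp_versions helper from spdk/scripts/common.sh: both version strings are split on ".-:" and compared numerically field by field. A simplified model of the strict less-than case it traces (not the verbatim helper, which also normalizes non-numeric fields and handles versions of different length):

    # Simplified model of the comparison traced above.
    lt() {
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        local v
        for ((v = 0; v < ${#ver1[@]} && v < ${#ver2[@]}; v++)); do
            ((ver1[v] > ver2[v])) && return 1   # first differing field decides
            ((ver1[v] < ver2[v])) && return 0
        done
        return 1   # all shared fields equal: not strictly less-than
    }
    lt 22.11.4 21.11.0; echo $?   # 1: 22.11.4 is not older than 21.11.0
    lt 22.11.4 24.07.0; echo $?   # 0: 22.11.4 predates 24.07.0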
00:02:35.638 10:34:35 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 22.11.4 24.07.0 00:02:35.638 10:34:35 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 22.11.4 '<' 24.07.0 00:02:35.638 10:34:35 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:35.638 10:34:35 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:35.638 10:34:35 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:35.638 10:34:35 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:35.638 10:34:35 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:35.638 10:34:35 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:35.638 10:34:35 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:02:35.638 10:34:35 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:02:35.638 10:34:35 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:35.638 10:34:35 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:35.638 10:34:35 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:35.638 10:34:35 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:02:35.638 10:34:35 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:35.638 10:34:35 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:35.638 10:34:35 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 22 00:02:35.638 10:34:35 build_native_dpdk -- scripts/common.sh@353 -- $ local d=22 00:02:35.638 10:34:35 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:02:35.638 10:34:35 build_native_dpdk -- scripts/common.sh@355 -- $ echo 22 00:02:35.638 10:34:35 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=22 00:02:35.638 10:34:35 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:02:35.638 10:34:35 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:02:35.638 10:34:35 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:35.638 10:34:35 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:02:35.638 10:34:35 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:02:35.638 10:34:35 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:35.638 10:34:35 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:02:35.638 10:34:35 build_native_dpdk -- scripts/common.sh@368 -- $ return 0 00:02:35.638 10:34:35 build_native_dpdk -- common/autobuild_common.sh@177 -- $ patch -p1 00:02:35.638 patching file lib/pcapng/rte_pcapng.c 00:02:35.638 Hunk #1 succeeded at 110 (offset -18 lines). 
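Read together, the two checks amount to version-gated patching of the DPDK tree, roughly as follows (paraphrased from the trace, not the verbatim _build_native_dpdk body; lt is the sketch above, and the *_patch variables are hypothetical stand-ins for the diffs the CI feeds to patch):

    dpdk_ver=22.11.4
    # The rte_config.h fix is needed on anything at least 21.11.0 ...
    if ! lt "$dpdk_ver" 21.11.0; then
        patch -p1 < "$rte_config_patch"   # hypothetical; hunk landed at offset +1 above
    fi
    # ... while the rte_pcapng.c fix only applies before 24.07.0.
    if lt "$dpdk_ver" 24.07.0; then
        patch -p1 < "$pcapng_patch"       # hypothetical; hunk landed at offset -18 above
    fi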
00:02:35.638 10:34:35 build_native_dpdk -- common/autobuild_common.sh@179 -- $ ge 22.11.4 24.07.0 00:02:35.638 10:34:35 build_native_dpdk -- scripts/common.sh@376 -- $ cmp_versions 22.11.4 '>=' 24.07.0 00:02:35.638 10:34:35 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:35.638 10:34:35 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:35.638 10:34:35 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:35.638 10:34:35 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:35.638 10:34:35 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:35.638 10:34:35 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:35.638 10:34:35 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=>=' 00:02:35.638 10:34:35 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:02:35.638 10:34:35 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:35.638 10:34:35 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:35.638 10:34:35 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:35.638 10:34:35 build_native_dpdk -- scripts/common.sh@348 -- $ : 1 00:02:35.638 10:34:35 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:35.638 10:34:35 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:35.638 10:34:35 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 22 00:02:35.638 10:34:35 build_native_dpdk -- scripts/common.sh@353 -- $ local d=22 00:02:35.638 10:34:35 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:02:35.638 10:34:35 build_native_dpdk -- scripts/common.sh@355 -- $ echo 22 00:02:35.638 10:34:35 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=22 00:02:35.638 10:34:35 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:02:35.638 10:34:35 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:02:35.638 10:34:35 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:35.638 10:34:35 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:02:35.638 10:34:35 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:02:35.638 10:34:35 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:35.638 10:34:35 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:02:35.638 10:34:35 build_native_dpdk -- scripts/common.sh@368 -- $ return 1 00:02:35.638 10:34:35 build_native_dpdk -- common/autobuild_common.sh@183 -- $ dpdk_kmods=false 00:02:35.638 10:34:35 build_native_dpdk -- common/autobuild_common.sh@184 -- $ uname -s 00:02:35.638 10:34:35 build_native_dpdk -- common/autobuild_common.sh@184 -- $ '[' Linux = FreeBSD ']' 00:02:35.638 10:34:35 build_native_dpdk -- common/autobuild_common.sh@188 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:02:35.638 10:34:35 build_native_dpdk -- common/autobuild_common.sh@188 -- $ meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:39.846 The Meson build system 00:02:39.846 Version: 1.5.0 00:02:39.846 Source dir: /home/vagrant/spdk_repo/dpdk 00:02:39.846 Build dir: /home/vagrant/spdk_repo/dpdk/build-tmp 00:02:39.846 Build type: native build 00:02:39.846 Program cat found: YES 
(/usr/bin/cat) 00:02:39.846 Project name: DPDK 00:02:39.846 Project version: 22.11.4 00:02:39.846 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:39.846 C linker for the host machine: gcc ld.bfd 2.40-14 00:02:39.846 Host machine cpu family: x86_64 00:02:39.846 Host machine cpu: x86_64 00:02:39.846 Message: ## Building in Developer Mode ## 00:02:39.846 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:39.846 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/check-symbols.sh) 00:02:39.846 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/options-ibverbs-static.sh) 00:02:39.846 Program objdump found: YES (/usr/bin/objdump) 00:02:39.846 Program python3 found: YES (/usr/bin/python3) 00:02:39.846 Program cat found: YES (/usr/bin/cat) 00:02:39.846 config/meson.build:83: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 00:02:39.846 Checking for size of "void *" : 8 00:02:39.846 Checking for size of "void *" : 8 (cached) 00:02:39.846 Library m found: YES 00:02:39.846 Library numa found: YES 00:02:39.846 Has header "numaif.h" : YES 00:02:39.846 Library fdt found: NO 00:02:39.846 Library execinfo found: NO 00:02:39.846 Has header "execinfo.h" : YES 00:02:39.846 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:39.846 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:39.846 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:39.846 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:39.846 Run-time dependency openssl found: YES 3.1.1 00:02:39.846 Run-time dependency libpcap found: YES 1.10.4 00:02:39.846 Has header "pcap.h" with dependency libpcap: YES 00:02:39.846 Compiler for C supports arguments -Wcast-qual: YES 00:02:39.846 Compiler for C supports arguments -Wdeprecated: YES 00:02:39.846 Compiler for C supports arguments -Wformat: YES 00:02:39.846 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:39.846 Compiler for C supports arguments -Wformat-security: NO 00:02:39.846 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:39.846 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:39.846 Compiler for C supports arguments -Wnested-externs: YES 00:02:39.846 Compiler for C supports arguments -Wold-style-definition: YES 00:02:39.846 Compiler for C supports arguments -Wpointer-arith: YES 00:02:39.846 Compiler for C supports arguments -Wsign-compare: YES 00:02:39.846 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:39.846 Compiler for C supports arguments -Wundef: YES 00:02:39.846 Compiler for C supports arguments -Wwrite-strings: YES 00:02:39.846 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:39.846 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:39.846 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:39.846 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:39.846 Compiler for C supports arguments -mavx512f: YES 00:02:39.846 Checking if "AVX512 checking" compiles: YES 00:02:39.846 Fetching value of define "__SSE4_2__" : 1 00:02:39.846 Fetching value of define "__AES__" : 1 00:02:39.846 Fetching value of define "__AVX__" : 1 00:02:39.846 Fetching value of define "__AVX2__" : 1 00:02:39.846 Fetching value of define "__AVX512BW__" : 1 00:02:39.846 Fetching value of define "__AVX512CD__" : 1 00:02:39.846 Fetching value of define "__AVX512DQ__" : 1 
00:02:39.846 Fetching value of define "__AVX512F__" : 1 00:02:39.846 Fetching value of define "__AVX512VL__" : 1 00:02:39.846 Fetching value of define "__PCLMUL__" : 1 00:02:39.846 Fetching value of define "__RDRND__" : 1 00:02:39.846 Fetching value of define "__RDSEED__" : 1 00:02:39.846 Fetching value of define "__VPCLMULQDQ__" : 1 00:02:39.846 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:39.846 Message: lib/kvargs: Defining dependency "kvargs" 00:02:39.846 Message: lib/telemetry: Defining dependency "telemetry" 00:02:39.846 Checking for function "getentropy" : YES 00:02:39.846 Message: lib/eal: Defining dependency "eal" 00:02:39.846 Message: lib/ring: Defining dependency "ring" 00:02:39.847 Message: lib/rcu: Defining dependency "rcu" 00:02:39.847 Message: lib/mempool: Defining dependency "mempool" 00:02:39.847 Message: lib/mbuf: Defining dependency "mbuf" 00:02:39.847 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:39.847 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:39.847 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:39.847 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:39.847 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:39.847 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:02:39.847 Compiler for C supports arguments -mpclmul: YES 00:02:39.847 Compiler for C supports arguments -maes: YES 00:02:39.847 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:39.847 Compiler for C supports arguments -mavx512bw: YES 00:02:39.847 Compiler for C supports arguments -mavx512dq: YES 00:02:39.847 Compiler for C supports arguments -mavx512vl: YES 00:02:39.847 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:39.847 Compiler for C supports arguments -mavx2: YES 00:02:39.847 Compiler for C supports arguments -mavx: YES 00:02:39.847 Message: lib/net: Defining dependency "net" 00:02:39.847 Message: lib/meter: Defining dependency "meter" 00:02:39.847 Message: lib/ethdev: Defining dependency "ethdev" 00:02:39.847 Message: lib/pci: Defining dependency "pci" 00:02:39.847 Message: lib/cmdline: Defining dependency "cmdline" 00:02:39.847 Message: lib/metrics: Defining dependency "metrics" 00:02:39.847 Message: lib/hash: Defining dependency "hash" 00:02:39.847 Message: lib/timer: Defining dependency "timer" 00:02:39.847 Fetching value of define "__AVX2__" : 1 (cached) 00:02:39.847 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:39.847 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:39.847 Fetching value of define "__AVX512CD__" : 1 (cached) 00:02:39.847 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:39.847 Message: lib/acl: Defining dependency "acl" 00:02:39.847 Message: lib/bbdev: Defining dependency "bbdev" 00:02:39.847 Message: lib/bitratestats: Defining dependency "bitratestats" 00:02:39.847 Run-time dependency libelf found: YES 0.191 00:02:39.847 Message: lib/bpf: Defining dependency "bpf" 00:02:39.847 Message: lib/cfgfile: Defining dependency "cfgfile" 00:02:39.847 Message: lib/compressdev: Defining dependency "compressdev" 00:02:39.847 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:39.847 Message: lib/distributor: Defining dependency "distributor" 00:02:39.847 Message: lib/efd: Defining dependency "efd" 00:02:39.847 Message: lib/eventdev: Defining dependency "eventdev" 00:02:39.847 Message: lib/gpudev: Defining dependency "gpudev" 00:02:39.847 Message: lib/gro: Defining dependency "gro" 00:02:39.847 Message: lib/gso: Defining dependency "gso" 
00:02:39.847 Message: lib/ip_frag: Defining dependency "ip_frag" 00:02:39.847 Message: lib/jobstats: Defining dependency "jobstats" 00:02:39.847 Message: lib/latencystats: Defining dependency "latencystats" 00:02:39.847 Message: lib/lpm: Defining dependency "lpm" 00:02:39.847 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:39.847 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:39.847 Fetching value of define "__AVX512IFMA__" : 1 00:02:39.847 Message: lib/member: Defining dependency "member" 00:02:39.847 Message: lib/pcapng: Defining dependency "pcapng" 00:02:39.847 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:39.847 Message: lib/power: Defining dependency "power" 00:02:39.847 Message: lib/rawdev: Defining dependency "rawdev" 00:02:39.847 Message: lib/regexdev: Defining dependency "regexdev" 00:02:39.847 Message: lib/dmadev: Defining dependency "dmadev" 00:02:39.847 Message: lib/rib: Defining dependency "rib" 00:02:39.847 Message: lib/reorder: Defining dependency "reorder" 00:02:39.847 Message: lib/sched: Defining dependency "sched" 00:02:39.847 Message: lib/security: Defining dependency "security" 00:02:39.847 Message: lib/stack: Defining dependency "stack" 00:02:39.847 Has header "linux/userfaultfd.h" : YES 00:02:39.847 Message: lib/vhost: Defining dependency "vhost" 00:02:39.847 Message: lib/ipsec: Defining dependency "ipsec" 00:02:39.847 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:39.847 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:39.847 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:39.847 Message: lib/fib: Defining dependency "fib" 00:02:39.847 Message: lib/port: Defining dependency "port" 00:02:39.847 Message: lib/pdump: Defining dependency "pdump" 00:02:39.847 Message: lib/table: Defining dependency "table" 00:02:39.847 Message: lib/pipeline: Defining dependency "pipeline" 00:02:39.847 Message: lib/graph: Defining dependency "graph" 00:02:39.847 Message: lib/node: Defining dependency "node" 00:02:39.847 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:39.847 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:39.847 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:39.847 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:39.847 Compiler for C supports arguments -Wno-sign-compare: YES 00:02:39.847 Compiler for C supports arguments -Wno-unused-value: YES 00:02:39.847 Compiler for C supports arguments -Wno-format: YES 00:02:39.847 Compiler for C supports arguments -Wno-format-security: YES 00:02:39.847 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:02:39.847 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:02:39.847 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:02:39.847 Compiler for C supports arguments -Wno-unused-parameter: YES 00:02:40.789 Fetching value of define "__AVX2__" : 1 (cached) 00:02:40.789 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:40.789 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:40.789 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:40.789 Compiler for C supports arguments -mavx512bw: YES (cached) 00:02:40.789 Compiler for C supports arguments -march=skylake-avx512: YES 00:02:40.789 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:02:40.789 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:40.789 Configuring doxy-api.conf using configuration 00:02:40.789 Program sphinx-build found: NO 00:02:40.789 
Configuring rte_build_config.h using configuration 00:02:40.789 Message: 00:02:40.789 ================= 00:02:40.789 Applications Enabled 00:02:40.789 ================= 00:02:40.789 00:02:40.789 apps: 00:02:40.789 dumpcap, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, test-crypto-perf, 00:02:40.789 test-eventdev, test-fib, test-flow-perf, test-gpudev, test-pipeline, test-pmd, test-regex, test-sad, 00:02:40.789 test-security-perf, 00:02:40.789 00:02:40.789 Message: 00:02:40.789 ================= 00:02:40.789 Libraries Enabled 00:02:40.789 ================= 00:02:40.789 00:02:40.789 libs: 00:02:40.789 kvargs, telemetry, eal, ring, rcu, mempool, mbuf, net, 00:02:40.789 meter, ethdev, pci, cmdline, metrics, hash, timer, acl, 00:02:40.789 bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, efd, 00:02:40.789 eventdev, gpudev, gro, gso, ip_frag, jobstats, latencystats, lpm, 00:02:40.789 member, pcapng, power, rawdev, regexdev, dmadev, rib, reorder, 00:02:40.789 sched, security, stack, vhost, ipsec, fib, port, pdump, 00:02:40.789 table, pipeline, graph, node, 00:02:40.789 00:02:40.789 Message: 00:02:40.789 =============== 00:02:40.789 Drivers Enabled 00:02:40.789 =============== 00:02:40.789 00:02:40.789 common: 00:02:40.789 00:02:40.789 bus: 00:02:40.789 pci, vdev, 00:02:40.789 mempool: 00:02:40.789 ring, 00:02:40.789 dma: 00:02:40.789 00:02:40.789 net: 00:02:40.789 i40e, 00:02:40.789 raw: 00:02:40.789 00:02:40.789 crypto: 00:02:40.789 00:02:40.789 compress: 00:02:40.789 00:02:40.789 regex: 00:02:40.789 00:02:40.789 vdpa: 00:02:40.789 00:02:40.789 event: 00:02:40.789 00:02:40.789 baseband: 00:02:40.789 00:02:40.789 gpu: 00:02:40.789 00:02:40.789 00:02:40.789 Message: 00:02:40.789 ================= 00:02:40.789 Content Skipped 00:02:40.789 ================= 00:02:40.789 00:02:40.789 apps: 00:02:40.789 00:02:40.789 libs: 00:02:40.789 kni: explicitly disabled via build config (deprecated lib) 00:02:40.789 flow_classify: explicitly disabled via build config (deprecated lib) 00:02:40.789 00:02:40.789 drivers: 00:02:40.789 common/cpt: not in enabled drivers build config 00:02:40.789 common/dpaax: not in enabled drivers build config 00:02:40.789 common/iavf: not in enabled drivers build config 00:02:40.789 common/idpf: not in enabled drivers build config 00:02:40.789 common/mvep: not in enabled drivers build config 00:02:40.789 common/octeontx: not in enabled drivers build config 00:02:40.789 bus/auxiliary: not in enabled drivers build config 00:02:40.789 bus/dpaa: not in enabled drivers build config 00:02:40.789 bus/fslmc: not in enabled drivers build config 00:02:40.789 bus/ifpga: not in enabled drivers build config 00:02:40.789 bus/vmbus: not in enabled drivers build config 00:02:40.789 common/cnxk: not in enabled drivers build config 00:02:40.789 common/mlx5: not in enabled drivers build config 00:02:40.789 common/qat: not in enabled drivers build config 00:02:40.789 common/sfc_efx: not in enabled drivers build config 00:02:40.789 mempool/bucket: not in enabled drivers build config 00:02:40.789 mempool/cnxk: not in enabled drivers build config 00:02:40.789 mempool/dpaa: not in enabled drivers build config 00:02:40.789 mempool/dpaa2: not in enabled drivers build config 00:02:40.789 mempool/octeontx: not in enabled drivers build config 00:02:40.789 mempool/stack: not in enabled drivers build config 00:02:40.789 dma/cnxk: not in enabled drivers build config 00:02:40.789 dma/dpaa: not in enabled drivers build config 00:02:40.789 dma/dpaa2: not in enabled 
drivers build config 00:02:40.789 dma/hisilicon: not in enabled drivers build config 00:02:40.789 dma/idxd: not in enabled drivers build config 00:02:40.789 dma/ioat: not in enabled drivers build config 00:02:40.789 dma/skeleton: not in enabled drivers build config 00:02:40.789 net/af_packet: not in enabled drivers build config 00:02:40.789 net/af_xdp: not in enabled drivers build config 00:02:40.789 net/ark: not in enabled drivers build config 00:02:40.789 net/atlantic: not in enabled drivers build config 00:02:40.789 net/avp: not in enabled drivers build config 00:02:40.789 net/axgbe: not in enabled drivers build config 00:02:40.789 net/bnx2x: not in enabled drivers build config 00:02:40.789 net/bnxt: not in enabled drivers build config 00:02:40.789 net/bonding: not in enabled drivers build config 00:02:40.789 net/cnxk: not in enabled drivers build config 00:02:40.789 net/cxgbe: not in enabled drivers build config 00:02:40.789 net/dpaa: not in enabled drivers build config 00:02:40.789 net/dpaa2: not in enabled drivers build config 00:02:40.789 net/e1000: not in enabled drivers build config 00:02:40.789 net/ena: not in enabled drivers build config 00:02:40.789 net/enetc: not in enabled drivers build config 00:02:40.789 net/enetfec: not in enabled drivers build config 00:02:40.789 net/enic: not in enabled drivers build config 00:02:40.789 net/failsafe: not in enabled drivers build config 00:02:40.789 net/fm10k: not in enabled drivers build config 00:02:40.789 net/gve: not in enabled drivers build config 00:02:40.789 net/hinic: not in enabled drivers build config 00:02:40.789 net/hns3: not in enabled drivers build config 00:02:40.789 net/iavf: not in enabled drivers build config 00:02:40.789 net/ice: not in enabled drivers build config 00:02:40.789 net/idpf: not in enabled drivers build config 00:02:40.789 net/igc: not in enabled drivers build config 00:02:40.789 net/ionic: not in enabled drivers build config 00:02:40.789 net/ipn3ke: not in enabled drivers build config 00:02:40.789 net/ixgbe: not in enabled drivers build config 00:02:40.789 net/kni: not in enabled drivers build config 00:02:40.789 net/liquidio: not in enabled drivers build config 00:02:40.789 net/mana: not in enabled drivers build config 00:02:40.789 net/memif: not in enabled drivers build config 00:02:40.790 net/mlx4: not in enabled drivers build config 00:02:40.790 net/mlx5: not in enabled drivers build config 00:02:40.790 net/mvneta: not in enabled drivers build config 00:02:40.790 net/mvpp2: not in enabled drivers build config 00:02:40.790 net/netvsc: not in enabled drivers build config 00:02:40.790 net/nfb: not in enabled drivers build config 00:02:40.790 net/nfp: not in enabled drivers build config 00:02:40.790 net/ngbe: not in enabled drivers build config 00:02:40.790 net/null: not in enabled drivers build config 00:02:40.790 net/octeontx: not in enabled drivers build config 00:02:40.790 net/octeon_ep: not in enabled drivers build config 00:02:40.790 net/pcap: not in enabled drivers build config 00:02:40.790 net/pfe: not in enabled drivers build config 00:02:40.790 net/qede: not in enabled drivers build config 00:02:40.790 net/ring: not in enabled drivers build config 00:02:40.790 net/sfc: not in enabled drivers build config 00:02:40.790 net/softnic: not in enabled drivers build config 00:02:40.790 net/tap: not in enabled drivers build config 00:02:40.790 net/thunderx: not in enabled drivers build config 00:02:40.790 net/txgbe: not in enabled drivers build config 00:02:40.790 net/vdev_netvsc: not in enabled drivers 
build config 00:02:40.790 net/vhost: not in enabled drivers build config 00:02:40.790 net/virtio: not in enabled drivers build config 00:02:40.790 net/vmxnet3: not in enabled drivers build config 00:02:40.790 raw/cnxk_bphy: not in enabled drivers build config 00:02:40.790 raw/cnxk_gpio: not in enabled drivers build config 00:02:40.790 raw/dpaa2_cmdif: not in enabled drivers build config 00:02:40.790 raw/ifpga: not in enabled drivers build config 00:02:40.790 raw/ntb: not in enabled drivers build config 00:02:40.790 raw/skeleton: not in enabled drivers build config 00:02:40.790 crypto/armv8: not in enabled drivers build config 00:02:40.790 crypto/bcmfs: not in enabled drivers build config 00:02:40.790 crypto/caam_jr: not in enabled drivers build config 00:02:40.790 crypto/ccp: not in enabled drivers build config 00:02:40.790 crypto/cnxk: not in enabled drivers build config 00:02:40.790 crypto/dpaa_sec: not in enabled drivers build config 00:02:40.790 crypto/dpaa2_sec: not in enabled drivers build config 00:02:40.790 crypto/ipsec_mb: not in enabled drivers build config 00:02:40.790 crypto/mlx5: not in enabled drivers build config 00:02:40.790 crypto/mvsam: not in enabled drivers build config 00:02:40.790 crypto/nitrox: not in enabled drivers build config 00:02:40.790 crypto/null: not in enabled drivers build config 00:02:40.790 crypto/octeontx: not in enabled drivers build config 00:02:40.790 crypto/openssl: not in enabled drivers build config 00:02:40.790 crypto/scheduler: not in enabled drivers build config 00:02:40.790 crypto/uadk: not in enabled drivers build config 00:02:40.790 crypto/virtio: not in enabled drivers build config 00:02:40.790 compress/isal: not in enabled drivers build config 00:02:40.790 compress/mlx5: not in enabled drivers build config 00:02:40.790 compress/octeontx: not in enabled drivers build config 00:02:40.790 compress/zlib: not in enabled drivers build config 00:02:40.790 regex/mlx5: not in enabled drivers build config 00:02:40.790 regex/cn9k: not in enabled drivers build config 00:02:40.790 vdpa/ifc: not in enabled drivers build config 00:02:40.790 vdpa/mlx5: not in enabled drivers build config 00:02:40.790 vdpa/sfc: not in enabled drivers build config 00:02:40.790 event/cnxk: not in enabled drivers build config 00:02:40.790 event/dlb2: not in enabled drivers build config 00:02:40.790 event/dpaa: not in enabled drivers build config 00:02:40.790 event/dpaa2: not in enabled drivers build config 00:02:40.790 event/dsw: not in enabled drivers build config 00:02:40.790 event/opdl: not in enabled drivers build config 00:02:40.790 event/skeleton: not in enabled drivers build config 00:02:40.790 event/sw: not in enabled drivers build config 00:02:40.790 event/octeontx: not in enabled drivers build config 00:02:40.790 baseband/acc: not in enabled drivers build config 00:02:40.790 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:02:40.790 baseband/fpga_lte_fec: not in enabled drivers build config 00:02:40.790 baseband/la12xx: not in enabled drivers build config 00:02:40.790 baseband/null: not in enabled drivers build config 00:02:40.790 baseband/turbo_sw: not in enabled drivers build config 00:02:40.790 gpu/cuda: not in enabled drivers build config 00:02:40.790 00:02:40.790 00:02:40.790 Build targets in project: 309 00:02:40.790 00:02:40.790 DPDK 22.11.4 00:02:40.790 00:02:40.790 User defined options 00:02:40.790 libdir : lib 00:02:40.790 prefix : /home/vagrant/spdk_repo/dpdk/build 00:02:40.790 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 
00:02:40.790 c_link_args : 00:02:40.790 enable_docs : false 00:02:40.790 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:40.790 enable_kmods : false 00:02:40.790 machine : native 00:02:40.790 tests : false 00:02:40.790 00:02:40.790 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:40.790 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 00:02:41.051 10:34:40 build_native_dpdk -- common/autobuild_common.sh@192 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 00:02:41.051 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:02:41.051 [1/738] Generating lib/rte_kvargs_def with a custom command 00:02:41.051 [2/738] Generating lib/rte_telemetry_def with a custom command 00:02:41.051 [3/738] Generating lib/rte_kvargs_mingw with a custom command 00:02:41.051 [4/738] Generating lib/rte_telemetry_mingw with a custom command 00:02:41.051 [5/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:41.051 [6/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:41.051 [7/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:41.051 [8/738] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:41.051 [9/738] Linking static target lib/librte_kvargs.a 00:02:41.051 [10/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:41.051 [11/738] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:41.051 [12/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:41.051 [13/738] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:41.312 [14/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:41.312 [15/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:41.312 [16/738] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.312 [17/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:41.312 [18/738] Linking target lib/librte_kvargs.so.23.0 00:02:41.312 [19/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:41.312 [20/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:41.312 [21/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:41.312 [22/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:41.312 [23/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_log.c.o 00:02:41.312 [24/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:41.312 [25/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:41.574 [26/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:41.574 [27/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:41.574 [28/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:41.574 [29/738] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:41.574 [30/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:41.574 [31/738] Linking static target lib/librte_telemetry.a 00:02:41.574 [32/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:41.574 [33/738] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:41.574 [34/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:41.574 [35/738] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:41.574 [36/738] Generating symbol file lib/librte_kvargs.so.23.0.p/librte_kvargs.so.23.0.symbols 00:02:41.574 [37/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:41.574 [38/738] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:41.574 [39/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:41.574 [40/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:41.574 [41/738] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:41.835 [42/738] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.835 [43/738] Linking target lib/librte_telemetry.so.23.0 00:02:41.835 [44/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:41.835 [45/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:41.835 [46/738] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:41.835 [47/738] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:41.835 [48/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:41.835 [49/738] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:41.835 [50/738] Generating symbol file lib/librte_telemetry.so.23.0.p/librte_telemetry.so.23.0.symbols 00:02:42.096 [51/738] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:42.096 [52/738] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:42.096 [53/738] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:42.096 [54/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:42.096 [55/738] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:42.096 [56/738] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:42.096 [57/738] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:42.096 [58/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:42.096 [59/738] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:42.096 [60/738] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:42.096 [61/738] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:42.096 [62/738] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:42.096 [63/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:42.096 [64/738] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:42.096 [65/738] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:42.096 [66/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_log.c.o 00:02:42.096 [67/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:42.096 [68/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:42.096 [69/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:42.096 [70/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:42.357 [71/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:42.357 [72/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:42.357 
[73/738] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:42.357 [74/738] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:42.357 [75/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:42.357 [76/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:42.357 [77/738] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:42.357 [78/738] Generating lib/rte_eal_def with a custom command 00:02:42.357 [79/738] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:42.357 [80/738] Generating lib/rte_eal_mingw with a custom command 00:02:42.357 [81/738] Generating lib/rte_ring_def with a custom command 00:02:42.357 [82/738] Generating lib/rte_ring_mingw with a custom command 00:02:42.357 [83/738] Generating lib/rte_rcu_def with a custom command 00:02:42.357 [84/738] Generating lib/rte_rcu_mingw with a custom command 00:02:42.357 [85/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:42.357 [86/738] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:42.357 [87/738] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:42.357 [88/738] Linking static target lib/librte_ring.a 00:02:42.619 [89/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:42.619 [90/738] Generating lib/rte_mempool_def with a custom command 00:02:42.619 [91/738] Generating lib/rte_mempool_mingw with a custom command 00:02:42.619 [92/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:42.619 [93/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:42.619 [94/738] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.619 [95/738] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:42.619 [96/738] Generating lib/rte_mbuf_def with a custom command 00:02:42.619 [97/738] Generating lib/rte_mbuf_mingw with a custom command 00:02:42.619 [98/738] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:42.619 [99/738] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:42.877 [100/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:42.877 [101/738] Linking static target lib/librte_eal.a 00:02:42.877 [102/738] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:42.877 [103/738] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:43.135 [104/738] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:43.135 [105/738] Linking static target lib/librte_rcu.a 00:02:43.135 [106/738] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:43.135 [107/738] Linking static target lib/librte_mempool.a 00:02:43.135 [108/738] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:43.135 [109/738] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:43.135 [110/738] Generating lib/rte_net_def with a custom command 00:02:43.135 [111/738] Generating lib/rte_net_mingw with a custom command 00:02:43.135 [112/738] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:43.135 [113/738] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:43.135 [114/738] Generating lib/rte_meter_def with a custom command 00:02:43.135 [115/738] Generating lib/rte_meter_mingw with a custom command 00:02:43.135 [116/738] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:43.135 [117/738] 
Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:43.393 [118/738] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.393 [119/738] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:43.393 [120/738] Linking static target lib/librte_meter.a 00:02:43.393 [121/738] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.393 [122/738] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:43.393 [123/738] Linking static target lib/librte_mbuf.a 00:02:43.654 [124/738] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:02:43.654 [125/738] Linking static target lib/librte_net.a 00:02:43.654 [126/738] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:43.654 [127/738] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:43.654 [128/738] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:43.654 [129/738] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.654 [130/738] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:43.654 [131/738] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.654 [132/738] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:43.913 [133/738] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.913 [134/738] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:44.171 [135/738] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:44.171 [136/738] Generating lib/rte_ethdev_def with a custom command 00:02:44.171 [137/738] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:44.171 [138/738] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:44.171 [139/738] Generating lib/rte_ethdev_mingw with a custom command 00:02:44.171 [140/738] Generating lib/rte_pci_def with a custom command 00:02:44.171 [141/738] Generating lib/rte_pci_mingw with a custom command 00:02:44.171 [142/738] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:44.171 [143/738] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:44.171 [144/738] Linking static target lib/librte_pci.a 00:02:44.171 [145/738] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:44.171 [146/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:44.171 [147/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:44.171 [148/738] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:44.171 [149/738] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.429 [150/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:44.429 [151/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:44.429 [152/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:44.429 [153/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:44.429 [154/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:44.429 [155/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:44.429 [156/738] Generating lib/rte_cmdline_mingw with a custom command 00:02:44.429 [157/738] Generating lib/rte_cmdline_def with a custom command 00:02:44.429 
[158/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:44.429 [159/738] Generating lib/rte_metrics_def with a custom command 00:02:44.429 [160/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:44.429 [161/738] Generating lib/rte_metrics_mingw with a custom command 00:02:44.429 [162/738] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:44.429 [163/738] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:02:44.429 [164/738] Generating lib/rte_hash_def with a custom command 00:02:44.429 [165/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:44.429 [166/738] Generating lib/rte_hash_mingw with a custom command 00:02:44.429 [167/738] Generating lib/rte_timer_def with a custom command 00:02:44.687 [168/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:44.687 [169/738] Generating lib/rte_timer_mingw with a custom command 00:02:44.687 [170/738] Linking static target lib/librte_cmdline.a 00:02:44.687 [171/738] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:44.687 [172/738] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:02:44.687 [173/738] Linking static target lib/librte_metrics.a 00:02:44.687 [174/738] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:44.945 [175/738] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.945 [176/738] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:44.945 [177/738] Linking static target lib/librte_timer.a 00:02:44.945 [178/738] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:45.202 [179/738] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:02:45.202 [180/738] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:02:45.202 [181/738] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.203 [182/738] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:02:45.203 [183/738] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.203 [184/738] Generating lib/rte_acl_def with a custom command 00:02:45.203 [185/738] Generating lib/rte_acl_mingw with a custom command 00:02:45.203 [186/738] Generating lib/rte_bbdev_def with a custom command 00:02:45.203 [187/738] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:02:45.203 [188/738] Generating lib/rte_bbdev_mingw with a custom command 00:02:45.203 [189/738] Generating lib/rte_bitratestats_def with a custom command 00:02:45.460 [190/738] Generating lib/rte_bitratestats_mingw with a custom command 00:02:45.460 [191/738] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:45.460 [192/738] Linking static target lib/librte_ethdev.a 00:02:45.460 [193/738] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:02:45.460 [194/738] Linking static target lib/librte_bitratestats.a 00:02:45.717 [195/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:02:45.717 [196/738] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.717 [197/738] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:02:45.717 [198/738] Linking static target lib/librte_bbdev.a 00:02:45.975 [199/738] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:02:45.975 [200/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 
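Steps like "[192/738] Linking static target lib/librte_ethdev.a" above leave the archive under the build directory that ninja -C was pointed at. A quick spot-check of such an archive, assuming the default meson layout; the symbol grepped for is just an illustrative ethdev entry point:

    cd /home/vagrant/spdk_repo/dpdk/build-tmp
    ar t lib/librte_ethdev.a | head    # list the *.c.o members compiled above
    nm --defined-only lib/librte_ethdev.a | grep ' T rte_eth_dev_configure'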
00:02:45.975 [201/738] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.233 [202/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:02:46.233 [203/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:02:46.491 [204/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:02:46.491 [205/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:02:46.749 [206/738] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:02:46.749 [207/738] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:46.749 [208/738] Linking static target lib/librte_hash.a 00:02:46.749 [209/738] Generating lib/rte_bpf_def with a custom command 00:02:46.749 [210/738] Generating lib/rte_bpf_mingw with a custom command 00:02:46.749 [211/738] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx2.c.o 00:02:46.749 [212/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:02:46.749 [213/738] Generating lib/rte_cfgfile_mingw with a custom command 00:02:46.749 [214/738] Generating lib/rte_cfgfile_def with a custom command 00:02:47.007 [215/738] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:02:47.007 [216/738] Linking static target lib/librte_cfgfile.a 00:02:47.007 [217/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:02:47.007 [218/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:02:47.007 [219/738] Generating lib/rte_compressdev_def with a custom command 00:02:47.007 [220/738] Generating lib/rte_compressdev_mingw with a custom command 00:02:47.007 [221/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:02:47.007 [222/738] Linking static target lib/librte_bpf.a 00:02:47.007 [223/738] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx512.c.o 00:02:47.007 [224/738] Linking static target lib/librte_acl.a 00:02:47.265 [225/738] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.265 [226/738] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:47.265 [227/738] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.265 [228/738] Generating lib/rte_cryptodev_def with a custom command 00:02:47.265 [229/738] Generating lib/rte_cryptodev_mingw with a custom command 00:02:47.265 [230/738] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:47.265 [231/738] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.265 [232/738] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.265 [233/738] Generating lib/rte_distributor_def with a custom command 00:02:47.265 [234/738] Generating lib/rte_distributor_mingw with a custom command 00:02:47.522 [235/738] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:47.522 [236/738] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:47.522 [237/738] Linking static target lib/librte_compressdev.a 00:02:47.522 [238/738] Generating lib/rte_efd_def with a custom command 00:02:47.522 [239/738] Generating lib/rte_efd_mingw with a custom command 00:02:47.522 [240/738] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:02:47.522 [241/738] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:02:47.522 [242/738] Compiling C object 
lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:47.780 [243/738] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.780 [244/738] Linking target lib/librte_eal.so.23.0 00:02:47.780 [245/738] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:02:47.780 [246/738] Generating symbol file lib/librte_eal.so.23.0.p/librte_eal.so.23.0.symbols 00:02:47.780 [247/738] Linking target lib/librte_ring.so.23.0 00:02:48.037 [248/738] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:02:48.037 [249/738] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:02:48.037 [250/738] Linking target lib/librte_meter.so.23.0 00:02:48.037 [251/738] Linking target lib/librte_pci.so.23.0 00:02:48.037 [252/738] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.037 [253/738] Generating symbol file lib/librte_ring.so.23.0.p/librte_ring.so.23.0.symbols 00:02:48.037 [254/738] Linking target lib/librte_timer.so.23.0 00:02:48.037 [255/738] Linking target lib/librte_rcu.so.23.0 00:02:48.037 [256/738] Generating symbol file lib/librte_meter.so.23.0.p/librte_meter.so.23.0.symbols 00:02:48.037 [257/738] Generating symbol file lib/librte_pci.so.23.0.p/librte_pci.so.23.0.symbols 00:02:48.037 [258/738] Linking target lib/librte_mempool.so.23.0 00:02:48.037 [259/738] Linking target lib/librte_acl.so.23.0 00:02:48.037 [260/738] Generating symbol file lib/librte_rcu.so.23.0.p/librte_rcu.so.23.0.symbols 00:02:48.037 [261/738] Generating symbol file lib/librte_timer.so.23.0.p/librte_timer.so.23.0.symbols 00:02:48.037 [262/738] Generating symbol file lib/librte_mempool.so.23.0.p/librte_mempool.so.23.0.symbols 00:02:48.037 [263/738] Linking static target lib/librte_distributor.a 00:02:48.037 [264/738] Linking target lib/librte_cfgfile.so.23.0 00:02:48.037 [265/738] Generating symbol file lib/librte_acl.so.23.0.p/librte_acl.so.23.0.symbols 00:02:48.037 [266/738] Linking target lib/librte_mbuf.so.23.0 00:02:48.294 [267/738] Generating symbol file lib/librte_mbuf.so.23.0.p/librte_mbuf.so.23.0.symbols 00:02:48.294 [268/738] Linking target lib/librte_net.so.23.0 00:02:48.294 [269/738] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.294 [270/738] Linking target lib/librte_bbdev.so.23.0 00:02:48.294 [271/738] Generating symbol file lib/librte_net.so.23.0.p/librte_net.so.23.0.symbols 00:02:48.294 [272/738] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:02:48.294 [273/738] Linking target lib/librte_compressdev.so.23.0 00:02:48.294 [274/738] Linking target lib/librte_cmdline.so.23.0 00:02:48.294 [275/738] Linking target lib/librte_distributor.so.23.0 00:02:48.294 [276/738] Linking target lib/librte_hash.so.23.0 00:02:48.294 [277/738] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:02:48.552 [278/738] Generating lib/rte_eventdev_def with a custom command 00:02:48.552 [279/738] Generating lib/rte_eventdev_mingw with a custom command 00:02:48.552 [280/738] Generating lib/rte_gpudev_def with a custom command 00:02:48.552 [281/738] Generating lib/rte_gpudev_mingw with a custom command 00:02:48.552 [282/738] Generating symbol file lib/librte_hash.so.23.0.p/librte_hash.so.23.0.symbols 00:02:48.552 [283/738] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:02:48.552 [284/738] Linking static target lib/librte_efd.a 00:02:48.552 [285/738] 
Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.869 [286/738] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.869 [287/738] Linking target lib/librte_ethdev.so.23.0 00:02:48.869 [288/738] Linking target lib/librte_efd.so.23.0 00:02:48.869 [289/738] Generating symbol file lib/librte_ethdev.so.23.0.p/librte_ethdev.so.23.0.symbols 00:02:48.869 [290/738] Linking target lib/librte_metrics.so.23.0 00:02:48.869 [291/738] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:02:48.869 [292/738] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:02:48.869 [293/738] Linking target lib/librte_bpf.so.23.0 00:02:48.869 [294/738] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:02:48.869 [295/738] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:02:48.869 [296/738] Generating lib/rte_gro_def with a custom command 00:02:48.869 [297/738] Generating symbol file lib/librte_metrics.so.23.0.p/librte_metrics.so.23.0.symbols 00:02:48.869 [298/738] Generating lib/rte_gro_mingw with a custom command 00:02:48.869 [299/738] Generating symbol file lib/librte_bpf.so.23.0.p/librte_bpf.so.23.0.symbols 00:02:48.869 [300/738] Linking target lib/librte_bitratestats.so.23.0 00:02:49.128 [301/738] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:49.128 [302/738] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:02:49.128 [303/738] Linking static target lib/librte_gpudev.a 00:02:49.128 [304/738] Linking static target lib/librte_cryptodev.a 00:02:49.128 [305/738] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:02:49.128 [306/738] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:02:49.387 [307/738] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:02:49.387 [308/738] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:02:49.387 [309/738] Linking static target lib/librte_gro.a 00:02:49.387 [310/738] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:02:49.387 [311/738] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:02:49.387 [312/738] Generating lib/rte_gso_def with a custom command 00:02:49.387 [313/738] Generating lib/rte_gso_mingw with a custom command 00:02:49.387 [314/738] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:02:49.387 [315/738] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.387 [316/738] Linking target lib/librte_gro.so.23.0 00:02:49.646 [317/738] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:02:49.646 [318/738] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:02:49.646 [319/738] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.646 [320/738] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:02:49.646 [321/738] Linking static target lib/librte_gso.a 00:02:49.646 [322/738] Linking target lib/librte_gpudev.so.23.0 00:02:49.646 [323/738] Generating lib/rte_ip_frag_def with a custom command 00:02:49.646 [324/738] Generating lib/rte_ip_frag_mingw with a custom command 00:02:49.646 [325/738] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:02:49.646 [326/738] Linking static target lib/librte_eventdev.a 00:02:49.646 [327/738] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.646 [328/738] Compiling C 
object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:02:49.646 [329/738] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:02:49.646 [330/738] Generating lib/rte_jobstats_def with a custom command 00:02:49.646 [331/738] Generating lib/rte_jobstats_mingw with a custom command 00:02:49.904 [332/738] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:02:49.904 [333/738] Linking target lib/librte_gso.so.23.0 00:02:49.904 [334/738] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:02:49.904 [335/738] Linking static target lib/librte_jobstats.a 00:02:49.904 [336/738] Generating lib/rte_latencystats_def with a custom command 00:02:49.904 [337/738] Generating lib/rte_latencystats_mingw with a custom command 00:02:49.904 [338/738] Generating lib/rte_lpm_def with a custom command 00:02:49.904 [339/738] Generating lib/rte_lpm_mingw with a custom command 00:02:49.904 [340/738] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:02:49.904 [341/738] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:02:49.904 [342/738] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:02:49.904 [343/738] Linking static target lib/librte_ip_frag.a 00:02:49.904 [344/738] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.904 [345/738] Linking target lib/librte_jobstats.so.23.0 00:02:50.163 [346/738] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:02:50.163 [347/738] Linking static target lib/librte_latencystats.a 00:02:50.163 [348/738] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.163 [349/738] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:02:50.163 [350/738] Linking target lib/librte_ip_frag.so.23.0 00:02:50.163 [351/738] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:02:50.163 [352/738] Generating lib/rte_member_def with a custom command 00:02:50.163 [353/738] Generating lib/rte_member_mingw with a custom command 00:02:50.163 [354/738] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.422 [355/738] Linking target lib/librte_latencystats.so.23.0 00:02:50.422 [356/738] Generating lib/rte_pcapng_def with a custom command 00:02:50.422 [357/738] Generating lib/rte_pcapng_mingw with a custom command 00:02:50.422 [358/738] Generating symbol file lib/librte_ip_frag.so.23.0.p/librte_ip_frag.so.23.0.symbols 00:02:50.422 [359/738] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:50.422 [360/738] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:50.422 [361/738] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.422 [362/738] Linking target lib/librte_cryptodev.so.23.0 00:02:50.422 [363/738] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:02:50.422 [364/738] Linking static target lib/librte_lpm.a 00:02:50.422 [365/738] Compiling C object lib/librte_member.a.p/member_rte_member_sketch_avx512.c.o 00:02:50.679 [366/738] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:02:50.679 [367/738] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:50.679 [368/738] Generating symbol file lib/librte_cryptodev.so.23.0.p/librte_cryptodev.so.23.0.symbols 00:02:50.679 [369/738] Compiling C object 
lib/librte_power.a.p/power_rte_power.c.o 00:02:50.679 [370/738] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:02:50.679 [371/738] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:02:50.679 [372/738] Linking static target lib/librte_pcapng.a 00:02:50.679 [373/738] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.679 [374/738] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:50.679 [375/738] Compiling C object lib/librte_power.a.p/power_rte_power_empty_poll.c.o 00:02:50.679 [376/738] Linking target lib/librte_lpm.so.23.0 00:02:50.937 [377/738] Generating lib/rte_power_def with a custom command 00:02:50.937 [378/738] Generating lib/rte_power_mingw with a custom command 00:02:50.937 [379/738] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:50.937 [380/738] Generating lib/rte_rawdev_def with a custom command 00:02:50.937 [381/738] Generating lib/rte_rawdev_mingw with a custom command 00:02:50.937 [382/738] Generating lib/rte_regexdev_def with a custom command 00:02:50.937 [383/738] Generating lib/rte_regexdev_mingw with a custom command 00:02:50.937 [384/738] Generating symbol file lib/librte_lpm.so.23.0.p/librte_lpm.so.23.0.symbols 00:02:50.937 [385/738] Generating lib/rte_dmadev_def with a custom command 00:02:50.937 [386/738] Generating lib/rte_dmadev_mingw with a custom command 00:02:50.937 [387/738] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.937 [388/738] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.937 [389/738] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:50.937 [390/738] Linking target lib/librte_pcapng.so.23.0 00:02:50.937 [391/738] Linking target lib/librte_eventdev.so.23.0 00:02:50.937 [392/738] Compiling C object lib/librte_power.a.p/power_rte_power_intel_uncore.c.o 00:02:50.937 [393/738] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:02:50.937 [394/738] Linking static target lib/librte_rawdev.a 00:02:51.195 [395/738] Generating lib/rte_rib_def with a custom command 00:02:51.195 [396/738] Generating lib/rte_rib_mingw with a custom command 00:02:51.195 [397/738] Generating symbol file lib/librte_pcapng.so.23.0.p/librte_pcapng.so.23.0.symbols 00:02:51.195 [398/738] Generating lib/rte_reorder_def with a custom command 00:02:51.195 [399/738] Generating symbol file lib/librte_eventdev.so.23.0.p/librte_eventdev.so.23.0.symbols 00:02:51.195 [400/738] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:51.195 [401/738] Linking static target lib/librte_power.a 00:02:51.195 [402/738] Generating lib/rte_reorder_mingw with a custom command 00:02:51.195 [403/738] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:02:51.195 [404/738] Linking static target lib/librte_regexdev.a 00:02:51.195 [405/738] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:02:51.195 [406/738] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:51.195 [407/738] Linking static target lib/librte_dmadev.a 00:02:51.453 [408/738] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:02:51.453 [409/738] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:02:51.453 [410/738] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:02:51.453 [411/738] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 
00:02:51.453 [412/738] Generating lib/rte_sched_mingw with a custom command 00:02:51.453 [413/738] Generating lib/rte_sched_def with a custom command 00:02:51.453 [414/738] Linking target lib/librte_rawdev.so.23.0 00:02:51.453 [415/738] Generating lib/rte_security_def with a custom command 00:02:51.453 [416/738] Generating lib/rte_security_mingw with a custom command 00:02:51.453 [417/738] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:02:51.453 [418/738] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:51.453 [419/738] Linking static target lib/librte_reorder.a 00:02:51.453 [420/738] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:02:51.453 [421/738] Generating lib/rte_stack_def with a custom command 00:02:51.453 [422/738] Generating lib/rte_stack_mingw with a custom command 00:02:51.453 [423/738] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:02:51.453 [424/738] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:02:51.453 [425/738] Linking static target lib/librte_rib.a 00:02:51.453 [426/738] Linking static target lib/librte_stack.a 00:02:51.711 [427/738] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:02:51.711 [428/738] Linking static target lib/librte_member.a 00:02:51.711 [429/738] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:51.711 [430/738] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.711 [431/738] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.711 [432/738] Linking target lib/librte_dmadev.so.23.0 00:02:51.711 [433/738] Linking target lib/librte_reorder.so.23.0 00:02:51.711 [434/738] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.711 [435/738] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.711 [436/738] Linking target lib/librte_stack.so.23.0 00:02:51.711 [437/738] Linking target lib/librte_regexdev.so.23.0 00:02:51.711 [438/738] Generating symbol file lib/librte_dmadev.so.23.0.p/librte_dmadev.so.23.0.symbols 00:02:51.711 [439/738] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.711 [440/738] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:51.711 [441/738] Linking target lib/librte_power.so.23.0 00:02:51.711 [442/738] Linking static target lib/librte_security.a 00:02:51.969 [443/738] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.969 [444/738] Linking target lib/librte_member.so.23.0 00:02:51.969 [445/738] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.969 [446/738] Linking target lib/librte_rib.so.23.0 00:02:51.969 [447/738] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:51.969 [448/738] Generating lib/rte_vhost_def with a custom command 00:02:51.969 [449/738] Generating symbol file lib/librte_rib.so.23.0.p/librte_rib.so.23.0.symbols 00:02:51.969 [450/738] Generating lib/rte_vhost_mingw with a custom command 00:02:52.226 [451/738] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.226 [452/738] Linking target lib/librte_security.so.23.0 00:02:52.226 [453/738] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:52.226 [454/738] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:52.226 [455/738] 
Generating symbol file lib/librte_security.so.23.0.p/librte_security.so.23.0.symbols 00:02:52.484 [456/738] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:02:52.484 [457/738] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:02:52.484 [458/738] Generating lib/rte_ipsec_def with a custom command 00:02:52.484 [459/738] Generating lib/rte_ipsec_mingw with a custom command 00:02:52.484 [460/738] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:02:52.484 [461/738] Linking static target lib/librte_sched.a 00:02:52.742 [462/738] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:02:52.742 [463/738] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:52.742 [464/738] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:02:52.742 [465/738] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:02:52.742 [466/738] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:53.000 [467/738] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.000 [468/738] Linking target lib/librte_sched.so.23.0 00:02:53.000 [469/738] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:02:53.000 [470/738] Generating symbol file lib/librte_sched.so.23.0.p/librte_sched.so.23.0.symbols 00:02:53.000 [471/738] Generating lib/rte_fib_def with a custom command 00:02:53.000 [472/738] Generating lib/rte_fib_mingw with a custom command 00:02:53.000 [473/738] Compiling C object lib/librte_fib.a.p/fib_dir24_8_avx512.c.o 00:02:53.257 [474/738] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:02:53.257 [475/738] Compiling C object lib/librte_fib.a.p/fib_trie_avx512.c.o 00:02:53.257 [476/738] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:02:53.515 [477/738] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:02:53.515 [478/738] Linking static target lib/librte_fib.a 00:02:53.515 [479/738] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:02:53.515 [480/738] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:02:53.773 [481/738] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.773 [482/738] Linking static target lib/librte_ipsec.a 00:02:53.773 [483/738] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:02:53.773 [484/738] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:02:53.773 [485/738] Linking target lib/librte_fib.so.23.0 00:02:53.773 [486/738] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:02:53.773 [487/738] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:02:54.030 [488/738] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.030 [489/738] Linking target lib/librte_ipsec.so.23.0 00:02:54.030 [490/738] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:02:54.287 [491/738] Generating lib/rte_port_def with a custom command 00:02:54.287 [492/738] Generating lib/rte_port_mingw with a custom command 00:02:54.288 [493/738] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:02:54.288 [494/738] Generating lib/rte_pdump_def with a custom command 00:02:54.288 [495/738] Generating lib/rte_pdump_mingw with a custom command 00:02:54.288 [496/738] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:02:54.288 [497/738] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:02:54.288 [498/738] Compiling C object 
lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:02:54.545 [499/738] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:02:54.545 [500/738] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:02:54.545 [501/738] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:02:54.545 [502/738] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:02:54.545 [503/738] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:02:54.545 [504/738] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:02:54.545 [505/738] Linking static target lib/librte_port.a 00:02:54.802 [506/738] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:02:54.802 [507/738] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:02:54.802 [508/738] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:02:54.802 [509/738] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.060 [510/738] Linking target lib/librte_port.so.23.0 00:02:55.060 [511/738] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:02:55.060 [512/738] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:02:55.060 [513/738] Generating symbol file lib/librte_port.so.23.0.p/librte_port.so.23.0.symbols 00:02:55.060 [514/738] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:02:55.060 [515/738] Linking static target lib/librte_pdump.a 00:02:55.317 [516/738] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.317 [517/738] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:02:55.317 [518/738] Linking target lib/librte_pdump.so.23.0 00:02:55.317 [519/738] Generating lib/rte_table_def with a custom command 00:02:55.317 [520/738] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:02:55.317 [521/738] Generating lib/rte_table_mingw with a custom command 00:02:55.317 [522/738] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:02:55.616 [523/738] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:02:55.616 [524/738] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:02:55.616 [525/738] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:02:55.616 [526/738] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:02:55.616 [527/738] Generating lib/rte_pipeline_def with a custom command 00:02:55.616 [528/738] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:02:55.616 [529/738] Generating lib/rte_pipeline_mingw with a custom command 00:02:55.616 [530/738] Linking static target lib/librte_table.a 00:02:55.616 [531/738] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:02:55.920 [532/738] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:02:55.920 [533/738] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:55.920 [534/738] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:02:56.179 [535/738] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.179 [536/738] Linking target lib/librte_table.so.23.0 00:02:56.179 [537/738] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:02:56.179 [538/738] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:02:56.179 [539/738] Generating lib/rte_graph_def with a 
custom command 00:02:56.179 [540/738] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:02:56.179 [541/738] Generating symbol file lib/librte_table.so.23.0.p/librte_table.so.23.0.symbols 00:02:56.179 [542/738] Generating lib/rte_graph_mingw with a custom command 00:02:56.438 [543/738] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:02:56.438 [544/738] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:02:56.438 [545/738] Linking static target lib/librte_graph.a 00:02:56.438 [546/738] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:02:56.438 [547/738] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:02:56.696 [548/738] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:02:56.696 [549/738] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:02:56.696 [550/738] Compiling C object lib/librte_node.a.p/node_null.c.o 00:02:56.696 [551/738] Compiling C object lib/librte_node.a.p/node_log.c.o 00:02:56.696 [552/738] Generating lib/rte_node_def with a custom command 00:02:56.696 [553/738] Generating lib/rte_node_mingw with a custom command 00:02:56.954 [554/738] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:02:56.954 [555/738] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.954 [556/738] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:56.954 [557/738] Linking target lib/librte_graph.so.23.0 00:02:56.954 [558/738] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:02:56.954 [559/738] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:56.954 [560/738] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:02:57.213 [561/738] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:57.213 [562/738] Generating symbol file lib/librte_graph.so.23.0.p/librte_graph.so.23.0.symbols 00:02:57.213 [563/738] Generating drivers/rte_bus_pci_def with a custom command 00:02:57.213 [564/738] Generating drivers/rte_bus_pci_mingw with a custom command 00:02:57.213 [565/738] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:57.213 [566/738] Generating drivers/rte_bus_vdev_def with a custom command 00:02:57.213 [567/738] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:02:57.213 [568/738] Generating drivers/rte_bus_vdev_mingw with a custom command 00:02:57.213 [569/738] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:57.213 [570/738] Generating drivers/rte_mempool_ring_def with a custom command 00:02:57.213 [571/738] Generating drivers/rte_mempool_ring_mingw with a custom command 00:02:57.213 [572/738] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:02:57.213 [573/738] Linking static target lib/librte_node.a 00:02:57.213 [574/738] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:57.213 [575/738] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:57.213 [576/738] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:57.471 [577/738] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:57.471 [578/738] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:57.471 [579/738] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.471 [580/738] Linking target lib/librte_node.so.23.0 00:02:57.471 [581/738] Generating 
drivers/rte_bus_vdev.pmd.c with a custom command 00:02:57.471 [582/738] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:57.471 [583/738] Linking static target drivers/librte_bus_vdev.a 00:02:57.471 [584/738] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:57.471 [585/738] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:57.471 [586/738] Linking static target drivers/librte_bus_pci.a 00:02:57.729 [587/738] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.729 [588/738] Compiling C object drivers/librte_bus_pci.so.23.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:57.729 [589/738] Compiling C object drivers/librte_bus_vdev.so.23.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:57.729 [590/738] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:02:57.729 [591/738] Linking target drivers/librte_bus_vdev.so.23.0 00:02:57.729 [592/738] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:02:57.729 [593/738] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:02:57.987 [594/738] Generating symbol file drivers/librte_bus_vdev.so.23.0.p/librte_bus_vdev.so.23.0.symbols 00:02:57.987 [595/738] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.987 [596/738] Linking target drivers/librte_bus_pci.so.23.0 00:02:57.987 [597/738] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:57.987 [598/738] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:57.987 [599/738] Generating symbol file drivers/librte_bus_pci.so.23.0.p/librte_bus_pci.so.23.0.symbols 00:02:57.987 [600/738] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:02:57.987 [601/738] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:58.246 [602/738] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:58.246 [603/738] Linking static target drivers/librte_mempool_ring.a 00:02:58.246 [604/738] Compiling C object drivers/librte_mempool_ring.so.23.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:58.246 [605/738] Linking target drivers/librte_mempool_ring.so.23.0 00:02:58.246 [606/738] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:02:58.504 [607/738] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:02:58.504 [608/738] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:02:58.504 [609/738] Linking static target drivers/net/i40e/base/libi40e_base.a 00:02:58.762 [610/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:02:59.021 [611/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:02:59.021 [612/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:02:59.280 [613/738] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:02:59.280 [614/738] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:02:59.280 [615/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:02:59.280 [616/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:02:59.280 [617/738] Generating drivers/rte_net_i40e_def with a custom command 00:02:59.538 [618/738] Generating 
drivers/rte_net_i40e_mingw with a custom command 00:02:59.538 [619/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:02:59.797 [620/738] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:03:00.055 [621/738] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:03:00.055 [622/738] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:03:00.313 [623/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:03:00.313 [624/738] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:03:00.313 [625/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:03:00.313 [626/738] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:03:00.571 [627/738] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:03:00.571 [628/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:03:00.571 [629/738] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:03:00.571 [630/738] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:03:00.571 [631/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_avx2.c.o 00:03:00.830 [632/738] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:03:01.087 [633/738] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:03:01.087 [634/738] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:03:01.087 [635/738] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:03:01.087 [636/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:03:01.087 [637/738] Linking static target drivers/libtmp_rte_net_i40e.a 00:03:01.088 [638/738] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:03:01.346 [639/738] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:03:01.346 [640/738] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:03:01.346 [641/738] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:03:01.346 [642/738] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:03:01.346 [643/738] Linking static target drivers/librte_net_i40e.a 00:03:01.346 [644/738] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:03:01.346 [645/738] Compiling C object drivers/librte_net_i40e.so.23.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:03:01.604 [646/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:03:01.604 [647/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:03:01.604 [648/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:03:01.869 [649/738] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.869 [650/738] Linking target drivers/librte_net_i40e.so.23.0 00:03:01.869 [651/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:03:01.869 [652/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:03:01.869 [653/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:03:01.869 
[654/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:03:01.869 [655/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:03:01.869 [656/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:03:02.128 [657/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:03:02.128 [658/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:03:02.128 [659/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:03:02.128 [660/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:03:02.387 [661/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:03:02.387 [662/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:03:02.387 [663/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:03:02.644 [664/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:03:02.644 [665/738] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:02.644 [666/738] Linking static target lib/librte_vhost.a 00:03:02.902 [667/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:03:02.902 [668/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:03:02.902 [669/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:03:03.160 [670/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:03:03.160 [671/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:03:03.160 [672/738] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:03:03.160 [673/738] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:03:03.160 [674/738] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:03:03.160 [675/738] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:03:03.418 [676/738] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.418 [677/738] Linking target lib/librte_vhost.so.23.0 00:03:03.418 [678/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:03:03.418 [679/738] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:03:03.418 [680/738] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:03:03.677 [681/738] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:03:03.677 [682/738] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:03:03.677 [683/738] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:03:03.677 [684/738] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:03:03.677 [685/738] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:03:03.677 [686/738] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:03:03.936 [687/738] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:03:03.936 [688/738] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:03:03.936 [689/738] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:03:04.194 [690/738] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:03:04.194 
[691/738] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:03:04.194 [692/738] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:03:04.194 [693/738] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:03:04.453 [694/738] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:03:04.453 [695/738] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:03:04.711 [696/738] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:03:04.711 [697/738] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:03:04.711 [698/738] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:03:04.711 [699/738] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:03:04.969 [700/738] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:03:04.969 [701/738] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:03:04.969 [702/738] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:03:05.227 [703/738] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:03:05.227 [704/738] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:03:05.227 [705/738] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:03:05.485 [706/738] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:03:05.485 [707/738] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:03:05.743 [708/738] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:03:05.743 [709/738] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:03:06.002 [710/738] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:03:06.002 [711/738] Linking static target lib/librte_pipeline.a 00:03:06.002 [712/738] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:03:06.002 [713/738] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:03:06.002 [714/738] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:03:06.002 [715/738] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:03:06.260 [716/738] Linking target app/dpdk-pdump 00:03:06.260 [717/738] Linking target app/dpdk-dumpcap 00:03:06.260 [718/738] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:03:06.260 [719/738] Linking target app/dpdk-proc-info 00:03:06.260 [720/738] Linking target app/dpdk-test-acl 00:03:06.260 [721/738] Linking target app/dpdk-test-bbdev 00:03:06.260 [722/738] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:03:06.260 [723/738] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:03:06.519 [724/738] Linking target app/dpdk-test-cmdline 00:03:06.519 [725/738] Linking target app/dpdk-test-compress-perf 00:03:06.519 [726/738] Linking target app/dpdk-test-crypto-perf 00:03:06.519 [727/738] Linking target app/dpdk-test-fib 00:03:06.519 [728/738] Linking target app/dpdk-test-eventdev 00:03:06.519 [729/738] Linking target app/dpdk-test-flow-perf 00:03:06.519 [730/738] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:03:06.519 [731/738] Linking target app/dpdk-test-gpudev 00:03:06.777 [732/738] Linking target app/dpdk-test-pipeline 00:03:06.777 [733/738] Linking target app/dpdk-testpmd 00:03:06.777 [734/738] Linking target app/dpdk-test-regex 00:03:06.777 [735/738] Linking target app/dpdk-test-sad 00:03:06.777 [736/738] Linking target app/dpdk-test-security-perf 00:03:09.311 [737/738] Generating lib/pipeline.sym_chk with a custom 
command (wrapped by meson to capture output) 00:03:09.311 [738/738] Linking target lib/librte_pipeline.so.23.0 00:03:09.311 10:35:08 build_native_dpdk -- common/autobuild_common.sh@194 -- $ uname -s 00:03:09.311 10:35:08 build_native_dpdk -- common/autobuild_common.sh@194 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:03:09.311 10:35:08 build_native_dpdk -- common/autobuild_common.sh@207 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install 00:03:09.311 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:03:09.311 [0/1] Installing files. 00:03:09.311 Installing subdir /home/vagrant/spdk_repo/dpdk/examples to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples 00:03:09.311 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:09.311 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:09.311 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:09.311 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:09.311 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:09.311 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/README to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:09.311 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/dummy.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:09.311 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t1.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:09.311 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t2.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:09.311 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t3.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:09.311 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:09.311 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:09.311 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:09.311 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:09.311 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:09.312 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:09.312 Installing /home/vagrant/spdk_repo/dpdk/examples/common/pkt_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common 00:03:09.312 Installing /home/vagrant/spdk_repo/dpdk/examples/common/altivec/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/altivec 00:03:09.312 Installing /home/vagrant/spdk_repo/dpdk/examples/common/neon/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/neon 00:03:09.312 Installing /home/vagrant/spdk_repo/dpdk/examples/common/sse/port_group.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/sse 00:03:09.312 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:09.312 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:09.312 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:09.312 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/dmafwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:09.312 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool 00:03:09.312 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:09.312 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:09.312 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:09.312 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:09.312 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:09.312 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:09.312 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:09.312 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:09.312 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:09.312 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:09.312 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:09.312 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:09.312 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:09.312 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:09.312 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:09.312 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:09.312 Installing 
/home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:09.312 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_aes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:09.312 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ccm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:09.312 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_cmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:09.312 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:09.312 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_gcm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:09.312 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_hmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:09.312 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_rsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:09.312 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_sha.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:09.312 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_tdes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:09.312 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_xts.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:09.312 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:09.312 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:03:09.312 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/flow_classify.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:03:09.312 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/ipv4_rules_file.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:03:09.312 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:09.312 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/flow_blocks.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:09.312 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:09.312 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:09.312 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:09.312 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:09.312 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:09.312 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:09.312 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:09.312 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:09.312 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:09.312 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:09.312 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:09.312 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:09.312 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:09.312 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:09.312 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:09.312 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/kni.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:09.312 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/kni.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:09.312 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:09.312 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:09.312 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:09.312 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:09.312 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:09.312 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:09.312 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:09.312 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:09.312 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:09.312 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:09.312 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:09.312 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:09.312 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:09.312 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:09.312 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:09.312 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:09.312 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:09.312 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/firewall.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:09.312 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:09.312 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:09.312 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/kni.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:09.312 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:09.312 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:09.312 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:09.312 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:09.312 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/tap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:09.312 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:09.313 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:09.313 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:09.313 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep0.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:09.313 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep1.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:09.313 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:09.313 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:09.313 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:09.313 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:09.313 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:09.313 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:09.313 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipip.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:09.313 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:09.313 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:09.313 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:09.313 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:09.313 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:09.313 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:09.313 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_process.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:09.313 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:09.313 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:09.313 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:09.313 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:09.313 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/rt.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:09.313 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:09.313 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:09.313 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:09.313 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:09.313 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp6.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:09.313 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:09.313 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:09.313 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:09.313 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:09.313 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/linux_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:09.313 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/load_env.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:09.313 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:09.313 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:09.313 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/run_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:09.313 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:09.313 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:09.313 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:09.313 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:09.313 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:09.313 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:09.313 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:09.313 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:09.313 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:09.313 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:09.313 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:09.313 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:09.313 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:09.313 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:09.313 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:09.313 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:09.313 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:09.313 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:09.313 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:09.313 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:09.313 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:09.313 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:09.313 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:09.313 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:09.313 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:09.313 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:09.313 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:09.313 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:09.313 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:09.313 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:09.313 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:09.313 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:09.313 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:09.313 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:09.313 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:09.313 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:09.313 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:09.313 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:09.313 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:09.313 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:09.313 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:09.313 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:09.313 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:09.313 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:09.313 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:09.313 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:09.313 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:09.313 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:09.313 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:09.313 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:09.313 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:09.314 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:09.314 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:09.314 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:09.314 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:09.314 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:09.314 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:09.314 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:09.314 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:09.314 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:09.314 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:09.314 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:09.314 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:09.314 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:09.314 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:09.314 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:09.314 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:09.314 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:09.314 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:09.314 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:09.314 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:09.314 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:09.314 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:09.314 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_fib.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:09.314 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:09.314 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:09.314 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:09.314 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:09.314 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:09.314 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:09.314 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_route.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:09.314 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:09.314 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:09.314 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:09.314 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:09.314 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:09.314 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt
00:03:09.314 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt
00:03:09.314 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process
00:03:09.314 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp
00:03:09.314 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client
00:03:09.314 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client
00:03:09.314 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:03:09.314 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:03:09.314 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:03:09.314 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:03:09.314 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:03:09.314 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:03:09.314 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared
00:03:09.314 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:03:09.314 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:03:09.314 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:03:09.314 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:03:09.314 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:03:09.314 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:03:09.314 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:03:09.314 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:03:09.314 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp
00:03:09.314 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp
00:03:09.314 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb
00:03:09.314 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/ntb_fwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb
00:03:09.314 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering
00:03:09.314 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering
00:03:09.314 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:03:09.314 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:03:09.314 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:03:09.314 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:03:09.314 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:03:09.314 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:03:09.314 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:03:09.314 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:03:09.314 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:03:09.314 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:03:09.314 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ethdev.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:09.314 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:09.314 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:09.314 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:09.314 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:09.314 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_routing_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:09.315 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:09.315 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:09.315 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:09.315 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:09.315 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:09.315 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:09.315 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:09.315 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:09.315 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:09.315 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:09.315 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:09.315 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:09.315 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:09.315 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:09.315 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/packet.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:09.315 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/pcap.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:09.315 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:09.315 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:09.315 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:09.315 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:09.315 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:09.315 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:09.315 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:09.315 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:09.315 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:09.315 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:09.315 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:09.315 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:09.315 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:09.315 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:09.315 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient
00:03:09.315 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/ptpclient.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient
00:03:09.315 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter
00:03:09.315 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter
00:03:09.315 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter
00:03:09.315 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter
00:03:09.315 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter
00:03:09.315 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:03:09.315 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/app_thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:03:09.315 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:03:09.315 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:03:09.315 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:03:09.315 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cmdline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:03:09.315 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:03:09.315 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:03:09.315 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:03:09.315 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:03:09.315 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_ov.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:03:09.315 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_pie.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:03:09.315 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_red.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:03:09.315 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/stats.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:03:09.315 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks
00:03:09.315 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks
00:03:09.315 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd
00:03:09.315 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/node/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/node
00:03:09.315 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/node/node.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/node
00:03:09.315 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server
00:03:09.315 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server
00:03:09.315 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server
00:03:09.315 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server
00:03:09.315 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server
00:03:09.315 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server
00:03:09.315 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/shared
00:03:09.315 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores
00:03:09.315 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores
00:03:09.315 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton
00:03:09.315 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/basicfwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton
00:03:09.315 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer
00:03:09.315 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer
00:03:09.315 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa
00:03:09.315 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa
00:03:09.315 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/vdpa_blk_compact.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa
00:03:09.315 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost
00:03:09.315 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost
00:03:09.315 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost
00:03:09.315 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/virtio_net.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost
00:03:09.315 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk
00:03:09.315 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk
00:03:09.315 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk_spec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk
00:03:09.315 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk
00:03:09.315 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk
00:03:09.315 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk_compat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk
00:03:09.315 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto
00:03:09.315 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto
00:03:09.315 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:09.315 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:09.315 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:09.315 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:09.316 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:09.316 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:09.316 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:09.316 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:09.316 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:09.316 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:09.316 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:09.316 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:09.316 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:09.316 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:09.316 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:09.316 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:03:09.316 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:03:09.316 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:03:09.316 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:03:09.316 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:03:09.316 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:03:09.316 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq
00:03:09.316 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq
00:03:09.316 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb
00:03:09.316 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb
00:03:09.316 Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:09.316 Installing lib/librte_kvargs.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:09.316 Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:09.316 Installing lib/librte_telemetry.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:09.316 Installing lib/librte_eal.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:09.316 Installing lib/librte_eal.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:09.316 Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:09.316 Installing lib/librte_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:09.316 Installing lib/librte_rcu.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:09.316 Installing lib/librte_rcu.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:09.316 Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:09.316 Installing lib/librte_mempool.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:09.316 Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:09.316 Installing lib/librte_mbuf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:09.316 Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:09.577 Installing lib/librte_net.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:09.577 Installing lib/librte_meter.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:09.577 Installing lib/librte_meter.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:09.577 Installing lib/librte_ethdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:09.577 Installing lib/librte_ethdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:09.577 Installing lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:09.577 Installing lib/librte_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:09.577 Installing lib/librte_cmdline.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:09.577 Installing lib/librte_cmdline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:09.577 Installing lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:09.577 Installing lib/librte_metrics.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:09.577 Installing lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:09.577 Installing lib/librte_hash.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:09.577 Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:09.577 Installing lib/librte_timer.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:09.577 Installing lib/librte_acl.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:09.577 Installing lib/librte_acl.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:09.577 Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:09.577 Installing lib/librte_bbdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:09.577 Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:09.577 Installing lib/librte_bitratestats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:09.577 Installing lib/librte_bpf.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:09.577 Installing lib/librte_bpf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:09.577 Installing lib/librte_cfgfile.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:09.577 Installing lib/librte_cfgfile.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:09.577 Installing lib/librte_compressdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:09.577 Installing lib/librte_compressdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:09.577 Installing lib/librte_cryptodev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:09.577 Installing lib/librte_cryptodev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:09.577 Installing lib/librte_distributor.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:09.577 Installing lib/librte_distributor.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:09.577 Installing lib/librte_efd.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:09.577 Installing lib/librte_efd.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:09.577 Installing lib/librte_eventdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:09.577 Installing lib/librte_eventdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:09.577 Installing lib/librte_gpudev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:09.577 Installing lib/librte_gpudev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:09.577 Installing lib/librte_gro.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:09.577 Installing lib/librte_gro.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:09.577 Installing lib/librte_gso.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:09.577 Installing lib/librte_gso.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:09.577 Installing lib/librte_ip_frag.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:09.577 Installing lib/librte_ip_frag.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:09.577 Installing lib/librte_jobstats.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:09.577 Installing lib/librte_jobstats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:09.577 Installing lib/librte_latencystats.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:09.577 Installing lib/librte_latencystats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:09.577 Installing lib/librte_lpm.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:09.577 Installing lib/librte_lpm.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:09.577 Installing lib/librte_member.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:09.577 Installing lib/librte_member.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:09.577 Installing lib/librte_pcapng.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:09.577 Installing lib/librte_pcapng.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:09.577 Installing lib/librte_power.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:09.577 Installing lib/librte_power.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:09.577 Installing lib/librte_rawdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:09.577 Installing lib/librte_rawdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:09.577 Installing lib/librte_regexdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:09.577 Installing lib/librte_regexdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:09.577 Installing lib/librte_dmadev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:09.577 Installing lib/librte_dmadev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:09.577 Installing lib/librte_rib.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:09.577 Installing lib/librte_rib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:09.577 Installing lib/librte_reorder.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:09.577 Installing lib/librte_reorder.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:09.577 Installing lib/librte_sched.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:09.577 Installing lib/librte_sched.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:09.577 Installing lib/librte_security.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:09.577 Installing lib/librte_security.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:09.577 Installing lib/librte_stack.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:09.577 Installing lib/librte_stack.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:09.577 Installing lib/librte_vhost.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:09.577 Installing lib/librte_vhost.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:09.577 Installing lib/librte_ipsec.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:09.577 Installing lib/librte_ipsec.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:09.577 Installing lib/librte_fib.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:09.577 Installing lib/librte_fib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:09.577 Installing lib/librte_port.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:09.577 Installing lib/librte_port.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:09.577 Installing lib/librte_pdump.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:09.578 Installing lib/librte_pdump.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:09.578 Installing lib/librte_table.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:09.578 Installing lib/librte_table.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:09.578 Installing lib/librte_pipeline.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:09.578 Installing lib/librte_pipeline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:09.578 Installing lib/librte_graph.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:09.578 Installing lib/librte_graph.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:09.578 Installing lib/librte_node.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:09.578 Installing lib/librte_node.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:09.578 Installing drivers/librte_bus_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:09.578 Installing drivers/librte_bus_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0
00:03:09.578 Installing drivers/librte_bus_vdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:09.578 Installing drivers/librte_bus_vdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0
00:03:09.578 Installing drivers/librte_mempool_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:09.578 Installing drivers/librte_mempool_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0
00:03:09.578 Installing drivers/librte_net_i40e.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:09.578 Installing drivers/librte_net_i40e.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0
00:03:09.578 Installing app/dpdk-dumpcap to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:09.578 Installing app/dpdk-pdump to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:09.578 Installing app/dpdk-proc-info to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:09.578 Installing app/dpdk-test-acl to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:09.578 Installing app/dpdk-test-bbdev to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:09.578 Installing app/dpdk-test-cmdline to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:09.578 Installing app/dpdk-test-compress-perf to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:09.578 Installing app/dpdk-test-crypto-perf to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:09.578 Installing app/dpdk-test-eventdev to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:09.578 Installing app/dpdk-test-fib to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:09.578 Installing app/dpdk-test-flow-perf to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:09.578 Installing app/dpdk-test-gpudev to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:09.578 Installing app/dpdk-test-pipeline to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:09.578 Installing app/dpdk-testpmd to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:09.578 Installing app/dpdk-test-regex to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:09.578 Installing app/dpdk-test-sad to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:09.578 Installing app/dpdk-test-security-perf to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:09.578 Installing /home/vagrant/spdk_repo/dpdk/config/rte_config.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.578 Installing /home/vagrant/spdk_repo/dpdk/lib/kvargs/rte_kvargs.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.578 Installing /home/vagrant/spdk_repo/dpdk/lib/telemetry/rte_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.578 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:03:09.578 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:03:09.578 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:03:09.578 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:03:09.578 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:03:09.578 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:03:09.578 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:03:09.578 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:03:09.578 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:03:09.578 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:03:09.578 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:03:09.578 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:03:09.578 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.578 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.578 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.578 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.578 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.578 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.578 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.578 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.578 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.578 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rtm.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.578 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.578 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.578 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.578 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_32.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.578 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_64.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.578 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.578 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.578 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_alarm.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.578 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitmap.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.578 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitops.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.578 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_branch_prediction.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.578 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bus.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.578 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_class.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.578 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_common.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.578 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_compat.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.578 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_debug.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.578 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_dev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.578 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_devargs.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.578 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.578 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_memconfig.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.578 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_trace.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.578 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_errno.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.578 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_epoll.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.578 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_fbarray.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.578 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hexdump.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.578 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hypervisor.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.578 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_interrupts.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.578 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_keepalive.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.578 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_launch.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.578 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.578 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_log.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.578 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_malloc.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.578 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_mcslock.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.578 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memory.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.578 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memzone.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.578 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.578 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_features.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.578 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_per_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.578 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pflock.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.578 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_random.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.578 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_reciprocal.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.578 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqcount.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.578 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqlock.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.578 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.578 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service_component.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.578 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_string_fns.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.578 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_tailq.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.578 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_thread.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.578 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_ticketlock.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.578 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_time.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.578 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.578 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.578 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point_register.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.578 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_uuid.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.579 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_version.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.579 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_vfio.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.579 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/linux/include/rte_os.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.579 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.579 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.579 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.579 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.579 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_c11_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.579 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_generic_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.579 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.579 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.579 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.579 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.579 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_zc.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.579 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.579 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.579 Installing /home/vagrant/spdk_repo/dpdk/lib/rcu/rte_rcu_qsbr.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.579 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.579 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.579 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.579 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.579 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.579 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_ptype.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.579 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.579 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_dyn.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.579 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.579 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tcp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.579 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_udp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.579 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_esp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.579 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_sctp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.579 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_icmp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.579 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_arp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.579 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ether.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.579 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_macsec.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.579 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_vxlan.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.579 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gre.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.579 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gtp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.579 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.579 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net_crc.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.579 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_mpls.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.579 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_higig.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.579 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ecpri.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.579 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_geneve.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.579 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_l2tpv2.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.579 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ppp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.579 Installing /home/vagrant/spdk_repo/dpdk/lib/meter/rte_meter.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.579 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_cman.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.579 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.579 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.579 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.579 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_dev_info.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.579 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.579 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow_driver.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.579 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.579 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr_driver.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.579 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.579 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm_driver.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.579 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.579 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_eth_ctrl.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.579 Installing /home/vagrant/spdk_repo/dpdk/lib/pci/rte_pci.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.579 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.579 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.579 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_num.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.579 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.579 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.579 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_string.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.579 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_rdline.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.579 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_vt100.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.579 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_socket.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.579 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_cirbuf.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.579 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_portlist.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.579 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.579 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.579 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_fbk_hash.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.579 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash_crc.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.579 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.579 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_jhash.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.579 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.579 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.579 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.579 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_generic.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.579 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_sw.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.579 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_x86.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.579 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_x86_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.579 Installing /home/vagrant/spdk_repo/dpdk/lib/timer/rte_timer.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.579 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.579 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl_osdep.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.579 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.579 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.579 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_op.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.579 Installing /home/vagrant/spdk_repo/dpdk/lib/bitratestats/rte_bitrate.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.579 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/bpf_def.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.579 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.579 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.579 Installing /home/vagrant/spdk_repo/dpdk/lib/cfgfile/rte_cfgfile.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.579 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_compressdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.579 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_comp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.579 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.579 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.579 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.579 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.579 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_sym.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.579 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_asym.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.579 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.579 Installing /home/vagrant/spdk_repo/dpdk/lib/distributor/rte_distributor.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.579 Installing /home/vagrant/spdk_repo/dpdk/lib/efd/rte_efd.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.579 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.579 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.579 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.579 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_ring.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.580 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_timer_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.580 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.580 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.580 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.580 Installing /home/vagrant/spdk_repo/dpdk/lib/gpudev/rte_gpudev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.580 Installing /home/vagrant/spdk_repo/dpdk/lib/gro/rte_gro.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.580 Installing /home/vagrant/spdk_repo/dpdk/lib/gso/rte_gso.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.580 Installing /home/vagrant/spdk_repo/dpdk/lib/ip_frag/rte_ip_frag.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.580 Installing /home/vagrant/spdk_repo/dpdk/lib/jobstats/rte_jobstats.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.580 Installing /home/vagrant/spdk_repo/dpdk/lib/latencystats/rte_latencystats.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.580 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.580 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm6.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.580 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.580 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.580 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_scalar.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.580 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.580 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.580 Installing /home/vagrant/spdk_repo/dpdk/lib/member/rte_member.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.580 Installing /home/vagrant/spdk_repo/dpdk/lib/pcapng/rte_pcapng.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.580 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.580 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_empty_poll.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.580 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_intel_uncore.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.580 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_pmd_mgmt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.580 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_guest_channel.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.580 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.580 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.580 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.580 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_driver.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.580 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.580 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.580 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.580 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.580 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib6.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.580 Installing /home/vagrant/spdk_repo/dpdk/lib/reorder/rte_reorder.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.580 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_approx.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.580 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_red.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.580 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.580 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched_common.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.580 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_pie.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.580 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.580 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security_driver.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.580 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:09.580 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_std.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.580 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.580 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.580 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_c11.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.580 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_stubs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.580 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vdpa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.580 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.580 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_async.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.580 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.580 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.580 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.580 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sad.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.580 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.580 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.580 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.580 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.580 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.580 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.580 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ras.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.580 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.580 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.580 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.580 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.580 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sym_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.580 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.580 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.580 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.580 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.580 Installing 
/home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.580 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.580 Installing /home/vagrant/spdk_repo/dpdk/lib/pdump/rte_pdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.580 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.580 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.580 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.580 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_em.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.580 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_learner.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.580 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_selector.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.580 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_wm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.580 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.580 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.580 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_array.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.580 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.580 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_cuckoo.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.580 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.580 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.580 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm_ipv6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.580 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_stub.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.580 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.580 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.580 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.580 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.580 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_port_in_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.580 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_table_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.580 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.580 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_extern.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.580 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ctl.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:09.580 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.580 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.580 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip4_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.580 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_eth_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.580 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.580 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.580 Installing /home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.580 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-devbind.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:09.580 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:09.580 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:09.580 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-hugepages.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:09.581 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/rte_build_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.581 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:09.581 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:09.581 Installing symlink pointing to librte_kvargs.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so.23 00:03:09.581 Installing symlink pointing to librte_kvargs.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so 00:03:09.581 Installing symlink pointing to librte_telemetry.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so.23 00:03:09.581 Installing symlink pointing to librte_telemetry.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so 00:03:09.581 Installing symlink pointing to librte_eal.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.23 00:03:09.581 Installing symlink pointing to librte_eal.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so 00:03:09.581 Installing symlink pointing to librte_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so.23 00:03:09.581 Installing symlink pointing to librte_ring.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so 00:03:09.581 Installing symlink pointing to librte_rcu.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so.23 00:03:09.581 Installing symlink pointing to librte_rcu.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so 00:03:09.581 Installing symlink pointing to librte_mempool.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so.23 00:03:09.581 Installing symlink pointing to librte_mempool.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so 00:03:09.581 Installing symlink pointing to librte_mbuf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so.23 00:03:09.581 Installing symlink pointing to librte_mbuf.so.23 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so 00:03:09.581 Installing symlink pointing to librte_net.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so.23 00:03:09.581 Installing symlink pointing to librte_net.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so 00:03:09.581 Installing symlink pointing to librte_meter.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so.23 00:03:09.581 Installing symlink pointing to librte_meter.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so 00:03:09.581 Installing symlink pointing to librte_ethdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so.23 00:03:09.581 Installing symlink pointing to librte_ethdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so 00:03:09.581 Installing symlink pointing to librte_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so.23 00:03:09.581 Installing symlink pointing to librte_pci.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so 00:03:09.581 Installing symlink pointing to librte_cmdline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so.23 00:03:09.581 Installing symlink pointing to librte_cmdline.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so 00:03:09.581 Installing symlink pointing to librte_metrics.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so.23 00:03:09.581 Installing symlink pointing to librte_metrics.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so 00:03:09.581 Installing symlink pointing to librte_hash.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so.23 00:03:09.581 Installing symlink pointing to librte_hash.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so 00:03:09.581 Installing symlink pointing to librte_timer.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so.23 00:03:09.581 Installing symlink pointing to librte_timer.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so 00:03:09.581 Installing symlink pointing to librte_acl.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so.23 00:03:09.581 Installing symlink pointing to librte_acl.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so 00:03:09.581 Installing symlink pointing to librte_bbdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so.23 00:03:09.581 Installing symlink pointing to librte_bbdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so 00:03:09.581 Installing symlink pointing to librte_bitratestats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so.23 00:03:09.581 Installing symlink pointing to librte_bitratestats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so 00:03:09.581 Installing symlink pointing to librte_bpf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so.23 00:03:09.581 Installing symlink pointing to librte_bpf.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so 00:03:09.581 Installing symlink pointing to librte_cfgfile.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so.23 00:03:09.581 Installing symlink pointing to librte_cfgfile.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so 00:03:09.581 Installing symlink pointing to librte_compressdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so.23 00:03:09.581 Installing symlink pointing to librte_compressdev.so.23 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so 00:03:09.581 Installing symlink pointing to librte_cryptodev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so.23 00:03:09.581 Installing symlink pointing to librte_cryptodev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so 00:03:09.581 Installing symlink pointing to librte_distributor.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so.23 00:03:09.581 Installing symlink pointing to librte_distributor.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so 00:03:09.581 Installing symlink pointing to librte_efd.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so.23 00:03:09.581 Installing symlink pointing to librte_efd.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so 00:03:09.581 Installing symlink pointing to librte_eventdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so.23 00:03:09.581 Installing symlink pointing to librte_eventdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so 00:03:09.581 Installing symlink pointing to librte_gpudev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so.23 00:03:09.840 './librte_bus_pci.so' -> 'dpdk/pmds-23.0/librte_bus_pci.so' 00:03:09.840 './librte_bus_pci.so.23' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23' 00:03:09.840 './librte_bus_pci.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23.0' 00:03:09.840 './librte_bus_vdev.so' -> 'dpdk/pmds-23.0/librte_bus_vdev.so' 00:03:09.840 './librte_bus_vdev.so.23' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23' 00:03:09.840 './librte_bus_vdev.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23.0' 00:03:09.840 './librte_mempool_ring.so' -> 'dpdk/pmds-23.0/librte_mempool_ring.so' 00:03:09.840 './librte_mempool_ring.so.23' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23' 00:03:09.840 './librte_mempool_ring.so.23.0' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23.0' 00:03:09.840 './librte_net_i40e.so' -> 'dpdk/pmds-23.0/librte_net_i40e.so' 00:03:09.840 './librte_net_i40e.so.23' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23' 00:03:09.840 './librte_net_i40e.so.23.0' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23.0' 00:03:09.840 Installing symlink pointing to librte_gpudev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so 00:03:09.840 Installing symlink pointing to librte_gro.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so.23 00:03:09.840 Installing symlink pointing to librte_gro.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so 00:03:09.840 Installing symlink pointing to librte_gso.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so.23 00:03:09.840 Installing symlink pointing to librte_gso.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so 00:03:09.840 Installing symlink pointing to librte_ip_frag.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so.23 00:03:09.840 Installing symlink pointing to librte_ip_frag.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so 00:03:09.840 Installing symlink pointing to librte_jobstats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so.23 00:03:09.840 Installing symlink pointing to librte_jobstats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so 00:03:09.840 Installing symlink pointing to librte_latencystats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so.23 00:03:09.840 Installing symlink pointing to librte_latencystats.so.23 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so 00:03:09.840 Installing symlink pointing to librte_lpm.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so.23 00:03:09.840 Installing symlink pointing to librte_lpm.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so 00:03:09.840 Installing symlink pointing to librte_member.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so.23 00:03:09.840 Installing symlink pointing to librte_member.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so 00:03:09.840 Installing symlink pointing to librte_pcapng.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so.23 00:03:09.840 Installing symlink pointing to librte_pcapng.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so 00:03:09.840 Installing symlink pointing to librte_power.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so.23 00:03:09.840 Installing symlink pointing to librte_power.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so 00:03:09.840 Installing symlink pointing to librte_rawdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so.23 00:03:09.840 Installing symlink pointing to librte_rawdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so 00:03:09.840 Installing symlink pointing to librte_regexdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so.23 00:03:09.840 Installing symlink pointing to librte_regexdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so 00:03:09.840 Installing symlink pointing to librte_dmadev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so.23 00:03:09.840 Installing symlink pointing to librte_dmadev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so 00:03:09.840 Installing symlink pointing to librte_rib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so.23 00:03:09.840 Installing symlink pointing to librte_rib.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so 00:03:09.840 Installing symlink pointing to librte_reorder.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so.23 00:03:09.840 Installing symlink pointing to librte_reorder.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so 00:03:09.840 Installing symlink pointing to librte_sched.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so.23 00:03:09.840 Installing symlink pointing to librte_sched.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so 00:03:09.840 Installing symlink pointing to librte_security.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so.23 00:03:09.840 Installing symlink pointing to librte_security.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so 00:03:09.840 Installing symlink pointing to librte_stack.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so.23 00:03:09.840 Installing symlink pointing to librte_stack.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so 00:03:09.840 Installing symlink pointing to librte_vhost.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so.23 00:03:09.840 Installing symlink pointing to librte_vhost.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so 00:03:09.840 Installing symlink pointing to librte_ipsec.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so.23 00:03:09.840 Installing symlink pointing to librte_ipsec.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so 
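The "Installing symlink pointing to ..." entries above implement the standard Linux shared-library versioning chain: the real ELF object is librte_X.so.23.0, its SONAME librte_X.so.23 is the runtime link the dynamic loader looks up, and the unversioned librte_X.so is the development link that -lrte_X resolves at link time. A sketch of what each pair of entries amounts to, using librte_eal and the lib directory shown in the log:

    cd /home/vagrant/spdk_repo/dpdk/build/lib
    ln -sf librte_eal.so.23.0 librte_eal.so.23    # runtime link, matches the SONAME
    ln -sf librte_eal.so.23   librte_eal.so       # dev link, used by -lrte_eal
    readelf -d librte_eal.so.23.0 | grep SONAME   # expect: [librte_eal.so.23]

The PMD drivers (librte_bus_pci, librte_bus_vdev, librte_mempool_ring, librte_net_i40e) get the same chain, but relocated under the plugin directory dpdk/pmds-23.0/, which is what the './librte_bus_pci.so' -> 'dpdk/pmds-23.0/...' moves earlier and the symlink-drivers-solibs.sh step below take care of.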
00:03:09.840 Installing symlink pointing to librte_fib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so.23 00:03:09.840 Installing symlink pointing to librte_fib.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so 00:03:09.841 Installing symlink pointing to librte_port.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so.23 00:03:09.841 Installing symlink pointing to librte_port.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so 00:03:09.841 Installing symlink pointing to librte_pdump.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so.23 00:03:09.841 Installing symlink pointing to librte_pdump.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so 00:03:09.841 Installing symlink pointing to librte_table.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so.23 00:03:09.841 Installing symlink pointing to librte_table.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so 00:03:09.841 Installing symlink pointing to librte_pipeline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so.23 00:03:09.841 Installing symlink pointing to librte_pipeline.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so 00:03:09.841 Installing symlink pointing to librte_graph.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so.23 00:03:09.841 Installing symlink pointing to librte_graph.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so 00:03:09.841 Installing symlink pointing to librte_node.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so.23 00:03:09.841 Installing symlink pointing to librte_node.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so 00:03:09.841 Installing symlink pointing to librte_bus_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23 00:03:09.841 Installing symlink pointing to librte_bus_pci.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:03:09.841 Installing symlink pointing to librte_bus_vdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23 00:03:09.841 Installing symlink pointing to librte_bus_vdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:03:09.841 Installing symlink pointing to librte_mempool_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23 00:03:09.841 Installing symlink pointing to librte_mempool_ring.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:03:09.841 Installing symlink pointing to librte_net_i40e.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23 00:03:09.841 Installing symlink pointing to librte_net_i40e.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:03:09.841 Running custom install script '/bin/sh /home/vagrant/spdk_repo/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-23.0' 00:03:09.841 10:35:09 build_native_dpdk -- common/autobuild_common.sh@213 -- $ cat 00:03:09.841 10:35:09 build_native_dpdk -- common/autobuild_common.sh@218 -- $ cd /home/vagrant/spdk_repo/spdk 00:03:09.841 00:03:09.841 real 0m34.300s 00:03:09.841 user 3m43.282s 00:03:09.841 sys 0m33.975s 00:03:09.841 10:35:09 build_native_dpdk -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:03:09.841 10:35:09 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:03:09.841 
************************************ 00:03:09.841 END TEST build_native_dpdk 00:03:09.841 ************************************ 00:03:09.841 10:35:09 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:03:09.841 10:35:09 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:03:09.841 10:35:09 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:03:09.841 10:35:09 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:03:09.841 10:35:09 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:03:09.841 10:35:09 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:03:09.841 10:35:09 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:03:09.841 10:35:09 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-xnvme --with-shared 00:03:09.841 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs... 00:03:10.099 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.099 DPDK includes: //home/vagrant/spdk_repo/dpdk/build/include 00:03:10.099 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:03:10.359 Using 'verbs' RDMA provider 00:03:21.261 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:03:31.272 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:03:31.844 Creating mk/config.mk...done. 00:03:31.844 Creating mk/cc.flags.mk...done. 00:03:31.844 Type 'make' to build. 00:03:31.844 10:35:31 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:03:31.844 10:35:31 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:03:31.844 10:35:31 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:03:31.844 10:35:31 -- common/autotest_common.sh@10 -- $ set +x 00:03:31.844 ************************************ 00:03:31.844 START TEST make 00:03:31.844 ************************************ 00:03:31.844 10:35:31 make -- common/autotest_common.sh@1125 -- $ make -j10 00:03:32.106 (cd /home/vagrant/spdk_repo/spdk/xnvme && \ 00:03:32.106 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \ 00:03:32.106 meson setup builddir \ 00:03:32.106 -Dwith-libaio=enabled \ 00:03:32.106 -Dwith-liburing=enabled \ 00:03:32.106 -Dwith-libvfn=disabled \ 00:03:32.106 -Dwith-spdk=false && \ 00:03:32.106 meson compile -C builddir && \ 00:03:32.106 cd -) 00:03:32.106 make[1]: Nothing to be done for 'all'. 
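Two handoffs happen in the block above. SPDK's configure is pointed at the private DPDK install via --with-dpdk=/home/vagrant/spdk_repo/dpdk/build and, per the "Using ... for additional libs" line, discovers it through the pkg-config files staged earlier; the bundled xnvme is then configured with its own meson setup, with libaio and liburing enabled and with libvfn and the SPDK backend disabled (the latter presumably so xnvme does not depend on the tree that is about to build it). A sketch of reproducing the DPDK discovery by hand, assuming the directories printed in the log:

    export PKG_CONFIG_PATH=/home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig:$PKG_CONFIG_PATH
    pkg-config --modversion libdpdk   # the DPDK version under test (ABI 23, per the pmds-23.0 paths)
    pkg-config --cflags libdpdk       # -I.../dpdk/build/include plus the build defines
    pkg-config --libs libdpdk         # -L.../dpdk/build/lib -lrte_eal -lrte_mbuf ...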
00:03:34.024 The Meson build system 00:03:34.024 Version: 1.5.0 00:03:34.024 Source dir: /home/vagrant/spdk_repo/spdk/xnvme 00:03:34.024 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:03:34.024 Build type: native build 00:03:34.024 Project name: xnvme 00:03:34.024 Project version: 0.7.3 00:03:34.024 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:03:34.024 C linker for the host machine: gcc ld.bfd 2.40-14 00:03:34.024 Host machine cpu family: x86_64 00:03:34.024 Host machine cpu: x86_64 00:03:34.024 Message: host_machine.system: linux 00:03:34.024 Compiler for C supports arguments -Wno-missing-braces: YES 00:03:34.024 Compiler for C supports arguments -Wno-cast-function-type: YES 00:03:34.024 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:03:34.024 Run-time dependency threads found: YES 00:03:34.024 Has header "setupapi.h" : NO 00:03:34.024 Has header "linux/blkzoned.h" : YES 00:03:34.024 Has header "linux/blkzoned.h" : YES (cached) 00:03:34.024 Has header "libaio.h" : YES 00:03:34.024 Library aio found: YES 00:03:34.024 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:03:34.024 Run-time dependency liburing found: YES 2.2 00:03:34.024 Dependency libvfn skipped: feature with-libvfn disabled 00:03:34.024 Run-time dependency appleframeworks found: NO (tried framework) 00:03:34.024 Run-time dependency appleframeworks found: NO (tried framework) 00:03:34.024 Configuring xnvme_config.h using configuration 00:03:34.024 Configuring xnvme.spec using configuration 00:03:34.024 Run-time dependency bash-completion found: YES 2.11 00:03:34.024 Message: Bash-completions: /usr/share/bash-completion/completions 00:03:34.024 Program cp found: YES (/usr/bin/cp) 00:03:34.024 Has header "winsock2.h" : NO 00:03:34.024 Has header "dbghelp.h" : NO 00:03:34.024 Library rpcrt4 found: NO 00:03:34.024 Library rt found: YES 00:03:34.024 Checking for function "clock_gettime" with dependency -lrt: YES 00:03:34.024 Found CMake: /usr/bin/cmake (3.27.7) 00:03:34.024 Run-time dependency _spdk found: NO (tried pkgconfig and cmake) 00:03:34.024 Run-time dependency wpdk found: NO (tried pkgconfig and cmake) 00:03:34.024 Run-time dependency spdk-win found: NO (tried pkgconfig and cmake) 00:03:34.024 Build targets in project: 32 00:03:34.024 00:03:34.024 xnvme 0.7.3 00:03:34.024 00:03:34.024 User defined options 00:03:34.024 with-libaio : enabled 00:03:34.024 with-liburing: enabled 00:03:34.024 with-libvfn : disabled 00:03:34.024 with-spdk : false 00:03:34.024 00:03:34.024 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:34.599 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir' 00:03:34.599 [1/203] Generating toolbox/xnvme-driver-script with a custom command 00:03:34.599 [2/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd.c.o 00:03:34.599 [3/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_admin_shim.c.o 00:03:34.599 [4/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_async.c.o 00:03:34.599 [5/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_dev.c.o 00:03:34.599 [6/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_mem_posix.c.o 00:03:34.599 [7/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_nil.c.o 00:03:34.599 [8/203] Compiling C object lib/libxnvme.so.p/xnvme_adm.c.o 00:03:34.599 [9/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_sync_psync.c.o 00:03:34.599 [10/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_nvme.c.o 00:03:34.599 [11/203] 
Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_emu.c.o 00:03:34.599 [12/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_posix.c.o 00:03:34.599 [13/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux.c.o 00:03:34.599 [14/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos.c.o 00:03:34.599 [15/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_admin.c.o 00:03:34.599 [16/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_thrpool.c.o 00:03:34.599 [17/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_dev.c.o 00:03:34.599 [18/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_hugepage.c.o 00:03:34.599 [19/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_sync.c.o 00:03:34.599 [20/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_libaio.c.o 00:03:34.861 [21/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_ucmd.c.o 00:03:34.861 [22/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_nvme.c.o 00:03:34.861 [23/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_dev.c.o 00:03:34.861 [24/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk.c.o 00:03:34.861 [25/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_block.c.o 00:03:34.861 [26/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk.c.o 00:03:34.861 [27/203] Compiling C object lib/libxnvme.so.p/xnvme_be_nosys.c.o 00:03:34.861 [28/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_admin.c.o 00:03:34.861 [29/203] Compiling C object lib/libxnvme.so.p/xnvme_be.c.o 00:03:34.861 [30/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_sync.c.o 00:03:34.861 [31/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_dev.c.o 00:03:34.861 [32/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_dev.c.o 00:03:34.861 [33/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_async.c.o 00:03:34.861 [34/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_sync.c.o 00:03:34.861 [35/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_mem.c.o 00:03:34.861 [36/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio.c.o 00:03:34.861 [37/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_admin.c.o 00:03:34.861 [38/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_admin.c.o 00:03:34.861 [39/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_dev.c.o 00:03:34.861 [40/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_liburing.c.o 00:03:34.861 [41/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_async.c.o 00:03:34.861 [42/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_sync.c.o 00:03:34.861 [43/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_mem.c.o 00:03:34.861 [44/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows.c.o 00:03:34.861 [45/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_iocp.c.o 00:03:34.861 [46/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_dev.c.o 00:03:34.861 [47/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_block.c.o 00:03:34.861 [48/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_ioring.c.o 00:03:34.861 [49/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_iocp_th.c.o 00:03:34.861 [50/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_fs.c.o 00:03:34.861 [51/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_nvme.c.o 00:03:34.861 [52/203] Compiling C object lib/libxnvme.so.p/xnvme_libconf_entries.c.o 00:03:34.861 [53/203] Compiling C object 
lib/libxnvme.so.p/xnvme_be_windows_mem.c.o 00:03:34.861 [54/203] Compiling C object lib/libxnvme.so.p/xnvme_cmd.c.o 00:03:34.861 [55/203] Compiling C object lib/libxnvme.so.p/xnvme_libconf.c.o 00:03:34.861 [56/203] Compiling C object lib/libxnvme.so.p/xnvme_file.c.o 00:03:35.122 [57/203] Compiling C object lib/libxnvme.so.p/xnvme_ident.c.o 00:03:35.122 [58/203] Compiling C object lib/libxnvme.so.p/xnvme_req.c.o 00:03:35.122 [59/203] Compiling C object lib/libxnvme.so.p/xnvme_lba.c.o 00:03:35.122 [60/203] Compiling C object lib/libxnvme.so.p/xnvme_geo.c.o 00:03:35.122 [61/203] Compiling C object lib/libxnvme.so.p/xnvme_dev.c.o 00:03:35.122 [62/203] Compiling C object lib/libxnvme.so.p/xnvme_kvs.c.o 00:03:35.122 [63/203] Compiling C object lib/libxnvme.so.p/xnvme_opts.c.o 00:03:35.122 [64/203] Compiling C object lib/libxnvme.so.p/xnvme_buf.c.o 00:03:35.122 [65/203] Compiling C object lib/libxnvme.so.p/xnvme_queue.c.o 00:03:35.122 [66/203] Compiling C object lib/libxnvme.so.p/xnvme_ver.c.o 00:03:35.122 [67/203] Compiling C object lib/libxnvme.so.p/xnvme_topology.c.o 00:03:35.122 [68/203] Compiling C object lib/libxnvme.so.p/xnvme_nvm.c.o 00:03:35.122 [69/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_nil.c.o 00:03:35.122 [70/203] Compiling C object lib/libxnvme.so.p/xnvme_spec_pp.c.o 00:03:35.122 [71/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_admin_shim.c.o 00:03:35.122 [72/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_mem_posix.c.o 00:03:35.122 [73/203] Compiling C object lib/libxnvme.a.p/xnvme_adm.c.o 00:03:35.122 [74/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_emu.c.o 00:03:35.122 [75/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd.c.o 00:03:35.122 [76/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_posix.c.o 00:03:35.388 [77/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_async.c.o 00:03:35.388 [78/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_dev.c.o 00:03:35.388 [79/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_nvme.c.o 00:03:35.388 [80/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_sync_psync.c.o 00:03:35.388 [81/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_thrpool.c.o 00:03:35.388 [82/203] Compiling C object lib/libxnvme.so.p/xnvme_znd.c.o 00:03:35.388 [83/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux.c.o 00:03:35.388 [84/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos.c.o 00:03:35.388 [85/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_admin.c.o 00:03:35.388 [86/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_libaio.c.o 00:03:35.388 [87/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_hugepage.c.o 00:03:35.388 [88/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_dev.c.o 00:03:35.388 [89/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_dev.c.o 00:03:35.388 [90/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_sync.c.o 00:03:35.388 [91/203] Compiling C object lib/libxnvme.so.p/xnvme_cli.c.o 00:03:35.388 [92/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_ucmd.c.o 00:03:35.388 [93/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_nvme.c.o 00:03:35.388 [94/203] Compiling C object lib/libxnvme.a.p/xnvme_be.c.o 00:03:35.388 [95/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_block.c.o 00:03:35.646 [96/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_liburing.c.o 00:03:35.646 [97/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk.c.o 00:03:35.646 [98/203] Compiling C 
object lib/libxnvme.a.p/xnvme_be_spdk_admin.c.o 00:03:35.646 [99/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk.c.o 00:03:35.646 [100/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_async.c.o 00:03:35.646 [101/203] Compiling C object lib/libxnvme.a.p/xnvme_be_nosys.c.o 00:03:35.646 [102/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_mem.c.o 00:03:35.646 [103/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_dev.c.o 00:03:35.646 [104/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_admin.c.o 00:03:35.646 [105/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_sync.c.o 00:03:35.646 [106/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_dev.c.o 00:03:35.646 [107/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_sync.c.o 00:03:35.646 [108/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_async.c.o 00:03:35.646 [109/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_admin.c.o 00:03:35.646 [110/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_dev.c.o 00:03:35.646 [111/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio.c.o 00:03:35.646 [112/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_mem.c.o 00:03:35.646 [113/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_sync.c.o 00:03:35.646 [114/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows.c.o 00:03:35.646 [115/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp.c.o 00:03:35.646 [116/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_ioring.c.o 00:03:35.646 [117/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp_th.c.o 00:03:35.646 [118/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_block.c.o 00:03:35.646 [119/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_mem.c.o 00:03:35.646 [120/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_fs.c.o 00:03:35.646 [121/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_nvme.c.o 00:03:35.646 [122/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_dev.c.o 00:03:35.646 [123/203] Compiling C object lib/libxnvme.a.p/xnvme_libconf_entries.c.o 00:03:35.646 [124/203] Compiling C object lib/libxnvme.a.p/xnvme_libconf.c.o 00:03:35.646 [125/203] Compiling C object lib/libxnvme.a.p/xnvme_geo.c.o 00:03:35.646 [126/203] Compiling C object lib/libxnvme.a.p/xnvme_file.c.o 00:03:35.646 [127/203] Compiling C object lib/libxnvme.a.p/xnvme_lba.c.o 00:03:35.646 [128/203] Compiling C object lib/libxnvme.a.p/xnvme_req.c.o 00:03:35.905 [129/203] Compiling C object lib/libxnvme.a.p/xnvme_ident.c.o 00:03:35.905 [130/203] Compiling C object lib/libxnvme.a.p/xnvme_cmd.c.o 00:03:35.905 [131/203] Compiling C object lib/libxnvme.a.p/xnvme_dev.c.o 00:03:35.905 [132/203] Compiling C object lib/libxnvme.a.p/xnvme_nvm.c.o 00:03:35.905 [133/203] Compiling C object lib/libxnvme.a.p/xnvme_buf.c.o 00:03:35.905 [134/203] Compiling C object lib/libxnvme.a.p/xnvme_kvs.c.o 00:03:35.905 [135/203] Compiling C object lib/libxnvme.a.p/xnvme_opts.c.o 00:03:35.905 [136/203] Compiling C object lib/libxnvme.so.p/xnvme_spec.c.o 00:03:35.905 [137/203] Compiling C object lib/libxnvme.a.p/xnvme_queue.c.o 00:03:35.905 [138/203] Compiling C object lib/libxnvme.a.p/xnvme_ver.c.o 00:03:35.905 [139/203] Compiling C object lib/libxnvme.a.p/xnvme_topology.c.o 00:03:35.905 [140/203] Compiling C object tests/xnvme_tests_cli.p/cli.c.o 00:03:35.905 [141/203] Linking target lib/libxnvme.so 00:03:35.905 [142/203] Compiling C object tests/xnvme_tests_async_intf.p/async_intf.c.o 00:03:35.905 
[143/203] Compiling C object tests/xnvme_tests_buf.p/buf.c.o 00:03:35.905 [144/203] Compiling C object lib/libxnvme.a.p/xnvme_spec_pp.c.o 00:03:35.905 [145/203] Compiling C object tests/xnvme_tests_enum.p/enum.c.o 00:03:35.905 [146/203] Compiling C object tests/xnvme_tests_xnvme_cli.p/xnvme_cli.c.o 00:03:36.164 [147/203] Compiling C object tests/xnvme_tests_scc.p/scc.c.o 00:03:36.164 [148/203] Compiling C object tests/xnvme_tests_xnvme_file.p/xnvme_file.c.o 00:03:36.164 [149/203] Compiling C object tests/xnvme_tests_znd_state.p/znd_state.c.o 00:03:36.164 [150/203] Compiling C object tests/xnvme_tests_znd_explicit_open.p/znd_explicit_open.c.o 00:03:36.164 [151/203] Compiling C object lib/libxnvme.a.p/xnvme_znd.c.o 00:03:36.164 [152/203] Compiling C object tests/xnvme_tests_znd_append.p/znd_append.c.o 00:03:36.164 [153/203] Compiling C object tests/xnvme_tests_lblk.p/lblk.c.o 00:03:36.164 [154/203] Compiling C object tests/xnvme_tests_kvs.p/kvs.c.o 00:03:36.164 [155/203] Compiling C object tests/xnvme_tests_map.p/map.c.o 00:03:36.164 [156/203] Compiling C object tests/xnvme_tests_ioworker.p/ioworker.c.o 00:03:36.164 [157/203] Compiling C object tests/xnvme_tests_znd_zrwa.p/znd_zrwa.c.o 00:03:36.164 [158/203] Compiling C object lib/libxnvme.a.p/xnvme_cli.c.o 00:03:36.164 [159/203] Compiling C object examples/xnvme_dev.p/xnvme_dev.c.o 00:03:36.164 [160/203] Compiling C object examples/xnvme_hello.p/xnvme_hello.c.o 00:03:36.164 [161/203] Compiling C object tools/xdd.p/xdd.c.o 00:03:36.164 [162/203] Compiling C object examples/xnvme_enum.p/xnvme_enum.c.o 00:03:36.422 [163/203] Compiling C object tools/kvs.p/kvs.c.o 00:03:36.422 [164/203] Compiling C object examples/xnvme_single_sync.p/xnvme_single_sync.c.o 00:03:36.422 [165/203] Compiling C object examples/xnvme_single_async.p/xnvme_single_async.c.o 00:03:36.422 [166/203] Compiling C object tools/lblk.p/lblk.c.o 00:03:36.422 [167/203] Compiling C object examples/xnvme_io_async.p/xnvme_io_async.c.o 00:03:36.422 [168/203] Compiling C object tools/zoned.p/zoned.c.o 00:03:36.422 [169/203] Compiling C object examples/zoned_io_sync.p/zoned_io_sync.c.o 00:03:36.422 [170/203] Compiling C object tools/xnvme_file.p/xnvme_file.c.o 00:03:36.422 [171/203] Compiling C object examples/zoned_io_async.p/zoned_io_async.c.o 00:03:36.422 [172/203] Compiling C object tools/xnvme.p/xnvme.c.o 00:03:36.422 [173/203] Compiling C object lib/libxnvme.a.p/xnvme_spec.c.o 00:03:36.422 [174/203] Linking static target lib/libxnvme.a 00:03:36.422 [175/203] Linking target tests/xnvme_tests_cli 00:03:36.680 [176/203] Linking target tests/xnvme_tests_async_intf 00:03:36.680 [177/203] Linking target tests/xnvme_tests_lblk 00:03:36.680 [178/203] Linking target tests/xnvme_tests_znd_explicit_open 00:03:36.680 [179/203] Linking target tests/xnvme_tests_scc 00:03:36.680 [180/203] Linking target tests/xnvme_tests_znd_zrwa 00:03:36.680 [181/203] Linking target tests/xnvme_tests_buf 00:03:36.680 [182/203] Linking target tests/xnvme_tests_ioworker 00:03:36.680 [183/203] Linking target tests/xnvme_tests_znd_append 00:03:36.680 [184/203] Linking target tests/xnvme_tests_znd_state 00:03:36.680 [185/203] Linking target tests/xnvme_tests_xnvme_file 00:03:36.680 [186/203] Linking target tests/xnvme_tests_enum 00:03:36.680 [187/203] Linking target tests/xnvme_tests_xnvme_cli 00:03:36.680 [188/203] Linking target tests/xnvme_tests_kvs 00:03:36.680 [189/203] Linking target tests/xnvme_tests_map 00:03:36.680 [190/203] Linking target tools/lblk 00:03:36.680 [191/203] Linking target tools/xnvme 
00:03:36.680 [192/203] Linking target tools/xdd 00:03:36.680 [193/203] Linking target tools/kvs 00:03:36.680 [194/203] Linking target examples/xnvme_dev 00:03:36.680 [195/203] Linking target tools/xnvme_file 00:03:36.680 [196/203] Linking target examples/xnvme_enum 00:03:36.680 [197/203] Linking target tools/zoned 00:03:36.680 [198/203] Linking target examples/xnvme_io_async 00:03:36.680 [199/203] Linking target examples/xnvme_hello 00:03:36.680 [200/203] Linking target examples/xnvme_single_sync 00:03:36.680 [201/203] Linking target examples/xnvme_single_async 00:03:36.680 [202/203] Linking target examples/zoned_io_async 00:03:36.680 [203/203] Linking target examples/zoned_io_sync 00:03:36.680 INFO: autodetecting backend as ninja 00:03:36.680 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:03:36.680 /home/vagrant/spdk_repo/spdk/xnvmebuild 00:04:08.803 CC lib/log/log_flags.o 00:04:08.803 CC lib/log/log.o 00:04:08.803 CC lib/log/log_deprecated.o 00:04:08.803 CC lib/ut/ut.o 00:04:08.803 CC lib/ut_mock/mock.o 00:04:08.803 LIB libspdk_ut.a 00:04:08.803 LIB libspdk_ut_mock.a 00:04:08.803 LIB libspdk_log.a 00:04:08.803 SO libspdk_ut.so.2.0 00:04:08.803 SO libspdk_ut_mock.so.6.0 00:04:08.803 SO libspdk_log.so.7.0 00:04:08.803 SYMLINK libspdk_ut.so 00:04:08.803 SYMLINK libspdk_ut_mock.so 00:04:08.803 SYMLINK libspdk_log.so 00:04:08.803 CC lib/ioat/ioat.o 00:04:08.803 CXX lib/trace_parser/trace.o 00:04:08.803 CC lib/dma/dma.o 00:04:08.803 CC lib/util/bit_array.o 00:04:08.803 CC lib/util/base64.o 00:04:08.803 CC lib/util/cpuset.o 00:04:08.803 CC lib/util/crc16.o 00:04:08.803 CC lib/util/crc32.o 00:04:08.803 CC lib/util/crc32c.o 00:04:08.803 CC lib/vfio_user/host/vfio_user_pci.o 00:04:08.803 CC lib/vfio_user/host/vfio_user.o 00:04:08.803 CC lib/util/crc32_ieee.o 00:04:08.803 CC lib/util/crc64.o 00:04:08.803 CC lib/util/dif.o 00:04:08.803 CC lib/util/fd.o 00:04:08.803 CC lib/util/fd_group.o 00:04:08.803 LIB libspdk_dma.a 00:04:08.803 CC lib/util/file.o 00:04:08.803 SO libspdk_dma.so.5.0 00:04:08.803 CC lib/util/hexlify.o 00:04:08.803 SYMLINK libspdk_dma.so 00:04:08.803 CC lib/util/iov.o 00:04:08.803 CC lib/util/math.o 00:04:08.803 LIB libspdk_ioat.a 00:04:08.803 CC lib/util/net.o 00:04:08.803 LIB libspdk_vfio_user.a 00:04:08.803 SO libspdk_ioat.so.7.0 00:04:08.803 SO libspdk_vfio_user.so.5.0 00:04:08.803 CC lib/util/pipe.o 00:04:08.803 CC lib/util/strerror_tls.o 00:04:08.803 SYMLINK libspdk_ioat.so 00:04:08.803 CC lib/util/string.o 00:04:08.803 SYMLINK libspdk_vfio_user.so 00:04:08.803 CC lib/util/uuid.o 00:04:08.803 CC lib/util/xor.o 00:04:08.803 CC lib/util/zipf.o 00:04:08.803 CC lib/util/md5.o 00:04:08.803 LIB libspdk_util.a 00:04:08.803 LIB libspdk_trace_parser.a 00:04:08.803 SO libspdk_util.so.10.0 00:04:08.803 SO libspdk_trace_parser.so.6.0 00:04:08.803 SYMLINK libspdk_trace_parser.so 00:04:08.803 SYMLINK libspdk_util.so 00:04:08.803 CC lib/rdma_utils/rdma_utils.o 00:04:08.803 CC lib/conf/conf.o 00:04:08.803 CC lib/rdma_provider/common.o 00:04:08.803 CC lib/rdma_provider/rdma_provider_verbs.o 00:04:08.803 CC lib/vmd/led.o 00:04:08.803 CC lib/vmd/vmd.o 00:04:08.803 CC lib/env_dpdk/env.o 00:04:08.803 CC lib/idxd/idxd.o 00:04:08.803 CC lib/env_dpdk/memory.o 00:04:08.803 CC lib/json/json_parse.o 00:04:08.803 CC lib/json/json_util.o 00:04:08.803 CC lib/json/json_write.o 00:04:08.803 LIB libspdk_rdma_provider.a 00:04:08.803 SO libspdk_rdma_provider.so.6.0 00:04:08.803 CC lib/env_dpdk/pci.o 00:04:08.803 SYMLINK libspdk_rdma_provider.so 
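From "CC lib/log/log_flags.o" onward the output switches to SPDK's quiet make format: CC compiles one source to an object, LIB archives objects into a static .a, SO links the versioned shared object (present because configure ran with --with-shared), and SYMLINK creates the unversioned link, the same chain seen in the DPDK install earlier. A rough shell equivalent of one such sequence for the log library, as an illustration only and not SPDK's actual make recipe (real invocations carry the project's include paths and warning flags; the version suffix comes from the makefiles):

    cc -fPIC -c lib/log/log.c -o log.o            # "CC lib/log/log.o"
    ar crs libspdk_log.a log.o                    # "LIB libspdk_log.a"
    cc -shared -Wl,-soname,libspdk_log.so.7 \
        -o libspdk_log.so.7.0 log.o               # "SO libspdk_log.so.7.0"
    ln -sf libspdk_log.so.7.0 libspdk_log.so      # "SYMLINK libspdk_log.so"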
00:04:08.803 CC lib/env_dpdk/init.o 00:04:08.803 LIB libspdk_rdma_utils.a 00:04:08.803 LIB libspdk_conf.a 00:04:08.803 SO libspdk_rdma_utils.so.1.0 00:04:08.803 SO libspdk_conf.so.6.0 00:04:08.803 SYMLINK libspdk_rdma_utils.so 00:04:08.803 CC lib/env_dpdk/threads.o 00:04:08.803 SYMLINK libspdk_conf.so 00:04:08.803 CC lib/env_dpdk/pci_ioat.o 00:04:08.803 CC lib/env_dpdk/pci_virtio.o 00:04:08.803 CC lib/env_dpdk/pci_vmd.o 00:04:08.803 LIB libspdk_json.a 00:04:08.803 CC lib/env_dpdk/pci_idxd.o 00:04:08.803 CC lib/env_dpdk/pci_event.o 00:04:08.803 SO libspdk_json.so.6.0 00:04:08.803 CC lib/env_dpdk/sigbus_handler.o 00:04:08.803 CC lib/idxd/idxd_user.o 00:04:08.803 SYMLINK libspdk_json.so 00:04:08.803 CC lib/idxd/idxd_kernel.o 00:04:08.803 CC lib/env_dpdk/pci_dpdk.o 00:04:09.060 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:09.060 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:09.060 LIB libspdk_vmd.a 00:04:09.060 SO libspdk_vmd.so.6.0 00:04:09.060 LIB libspdk_idxd.a 00:04:09.060 CC lib/jsonrpc/jsonrpc_server.o 00:04:09.060 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:09.060 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:09.060 CC lib/jsonrpc/jsonrpc_client.o 00:04:09.060 SO libspdk_idxd.so.12.1 00:04:09.060 SYMLINK libspdk_vmd.so 00:04:09.060 SYMLINK libspdk_idxd.so 00:04:09.317 LIB libspdk_jsonrpc.a 00:04:09.317 SO libspdk_jsonrpc.so.6.0 00:04:09.317 SYMLINK libspdk_jsonrpc.so 00:04:09.575 CC lib/rpc/rpc.o 00:04:09.835 LIB libspdk_env_dpdk.a 00:04:09.835 SO libspdk_env_dpdk.so.15.0 00:04:09.835 LIB libspdk_rpc.a 00:04:09.835 SO libspdk_rpc.so.6.0 00:04:09.835 SYMLINK libspdk_env_dpdk.so 00:04:09.835 SYMLINK libspdk_rpc.so 00:04:10.093 CC lib/notify/notify.o 00:04:10.093 CC lib/notify/notify_rpc.o 00:04:10.093 CC lib/keyring/keyring_rpc.o 00:04:10.093 CC lib/keyring/keyring.o 00:04:10.093 CC lib/trace/trace.o 00:04:10.093 CC lib/trace/trace_rpc.o 00:04:10.093 CC lib/trace/trace_flags.o 00:04:10.093 LIB libspdk_notify.a 00:04:10.350 SO libspdk_notify.so.6.0 00:04:10.350 LIB libspdk_keyring.a 00:04:10.350 SYMLINK libspdk_notify.so 00:04:10.350 LIB libspdk_trace.a 00:04:10.350 SO libspdk_keyring.so.2.0 00:04:10.350 SO libspdk_trace.so.11.0 00:04:10.350 SYMLINK libspdk_keyring.so 00:04:10.350 SYMLINK libspdk_trace.so 00:04:10.608 CC lib/sock/sock.o 00:04:10.608 CC lib/sock/sock_rpc.o 00:04:10.608 CC lib/thread/thread.o 00:04:10.608 CC lib/thread/iobuf.o 00:04:10.866 LIB libspdk_sock.a 00:04:11.123 SO libspdk_sock.so.10.0 00:04:11.123 SYMLINK libspdk_sock.so 00:04:11.381 CC lib/nvme/nvme_fabric.o 00:04:11.381 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:11.381 CC lib/nvme/nvme_ctrlr.o 00:04:11.381 CC lib/nvme/nvme_ns_cmd.o 00:04:11.381 CC lib/nvme/nvme_ns.o 00:04:11.381 CC lib/nvme/nvme_qpair.o 00:04:11.381 CC lib/nvme/nvme_pcie_common.o 00:04:11.381 CC lib/nvme/nvme_pcie.o 00:04:11.381 CC lib/nvme/nvme.o 00:04:11.946 CC lib/nvme/nvme_quirks.o 00:04:11.946 CC lib/nvme/nvme_transport.o 00:04:11.946 CC lib/nvme/nvme_discovery.o 00:04:11.946 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:12.204 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:12.204 CC lib/nvme/nvme_tcp.o 00:04:12.204 LIB libspdk_thread.a 00:04:12.204 CC lib/nvme/nvme_opal.o 00:04:12.204 CC lib/nvme/nvme_io_msg.o 00:04:12.204 SO libspdk_thread.so.10.1 00:04:12.204 SYMLINK libspdk_thread.so 00:04:12.204 CC lib/nvme/nvme_poll_group.o 00:04:12.204 CC lib/nvme/nvme_zns.o 00:04:12.462 CC lib/nvme/nvme_stubs.o 00:04:12.462 CC lib/accel/accel.o 00:04:12.462 CC lib/accel/accel_rpc.o 00:04:12.462 CC lib/accel/accel_sw.o 00:04:12.720 CC lib/nvme/nvme_auth.o 00:04:12.720 CC lib/nvme/nvme_cuse.o 
00:04:12.720 CC lib/nvme/nvme_rdma.o 00:04:12.978 CC lib/blob/blobstore.o 00:04:12.978 CC lib/init/json_config.o 00:04:12.978 CC lib/virtio/virtio.o 00:04:12.978 CC lib/fsdev/fsdev.o 00:04:13.236 CC lib/init/subsystem.o 00:04:13.236 CC lib/virtio/virtio_vhost_user.o 00:04:13.236 CC lib/init/subsystem_rpc.o 00:04:13.494 CC lib/virtio/virtio_vfio_user.o 00:04:13.494 CC lib/virtio/virtio_pci.o 00:04:13.494 CC lib/init/rpc.o 00:04:13.494 CC lib/fsdev/fsdev_io.o 00:04:13.494 CC lib/fsdev/fsdev_rpc.o 00:04:13.494 LIB libspdk_init.a 00:04:13.494 CC lib/blob/request.o 00:04:13.494 CC lib/blob/zeroes.o 00:04:13.752 SO libspdk_init.so.6.0 00:04:13.752 CC lib/blob/blob_bs_dev.o 00:04:13.752 LIB libspdk_accel.a 00:04:13.752 SYMLINK libspdk_init.so 00:04:13.752 LIB libspdk_virtio.a 00:04:13.752 SO libspdk_accel.so.16.0 00:04:13.752 SO libspdk_virtio.so.7.0 00:04:13.752 SYMLINK libspdk_virtio.so 00:04:13.752 SYMLINK libspdk_accel.so 00:04:13.752 CC lib/event/app.o 00:04:13.752 CC lib/event/reactor.o 00:04:13.752 CC lib/event/app_rpc.o 00:04:13.752 CC lib/event/log_rpc.o 00:04:13.752 LIB libspdk_fsdev.a 00:04:13.752 SO libspdk_fsdev.so.1.0 00:04:13.752 CC lib/event/scheduler_static.o 00:04:14.037 CC lib/bdev/bdev.o 00:04:14.037 CC lib/bdev/bdev_rpc.o 00:04:14.037 SYMLINK libspdk_fsdev.so 00:04:14.037 CC lib/bdev/bdev_zone.o 00:04:14.037 CC lib/bdev/part.o 00:04:14.037 LIB libspdk_nvme.a 00:04:14.037 CC lib/bdev/scsi_nvme.o 00:04:14.037 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:04:14.320 SO libspdk_nvme.so.14.0 00:04:14.320 LIB libspdk_event.a 00:04:14.320 SYMLINK libspdk_nvme.so 00:04:14.320 SO libspdk_event.so.14.0 00:04:14.320 SYMLINK libspdk_event.so 00:04:14.889 LIB libspdk_fuse_dispatcher.a 00:04:14.889 SO libspdk_fuse_dispatcher.so.1.0 00:04:14.889 SYMLINK libspdk_fuse_dispatcher.so 00:04:16.261 LIB libspdk_blob.a 00:04:16.261 SO libspdk_blob.so.11.0 00:04:16.261 SYMLINK libspdk_blob.so 00:04:16.520 CC lib/blobfs/blobfs.o 00:04:16.520 CC lib/blobfs/tree.o 00:04:16.520 CC lib/lvol/lvol.o 00:04:16.520 LIB libspdk_bdev.a 00:04:16.520 SO libspdk_bdev.so.16.0 00:04:16.778 SYMLINK libspdk_bdev.so 00:04:16.778 CC lib/scsi/dev.o 00:04:16.778 CC lib/scsi/scsi.o 00:04:16.778 CC lib/ftl/ftl_core.o 00:04:16.778 CC lib/scsi/port.o 00:04:16.778 CC lib/scsi/lun.o 00:04:16.778 CC lib/ublk/ublk.o 00:04:16.778 CC lib/nbd/nbd.o 00:04:16.778 CC lib/nvmf/ctrlr.o 00:04:17.036 CC lib/nbd/nbd_rpc.o 00:04:17.036 CC lib/nvmf/ctrlr_discovery.o 00:04:17.036 CC lib/nvmf/ctrlr_bdev.o 00:04:17.036 CC lib/scsi/scsi_bdev.o 00:04:17.036 CC lib/nvmf/subsystem.o 00:04:17.036 CC lib/ftl/ftl_init.o 00:04:17.294 LIB libspdk_nbd.a 00:04:17.294 SO libspdk_nbd.so.7.0 00:04:17.294 LIB libspdk_lvol.a 00:04:17.294 SO libspdk_lvol.so.10.0 00:04:17.294 SYMLINK libspdk_nbd.so 00:04:17.294 CC lib/scsi/scsi_pr.o 00:04:17.294 LIB libspdk_blobfs.a 00:04:17.294 CC lib/ftl/ftl_layout.o 00:04:17.294 SYMLINK libspdk_lvol.so 00:04:17.294 CC lib/scsi/scsi_rpc.o 00:04:17.294 SO libspdk_blobfs.so.10.0 00:04:17.294 CC lib/scsi/task.o 00:04:17.294 SYMLINK libspdk_blobfs.so 00:04:17.294 CC lib/ublk/ublk_rpc.o 00:04:17.294 CC lib/nvmf/nvmf.o 00:04:17.553 CC lib/nvmf/nvmf_rpc.o 00:04:17.553 CC lib/nvmf/transport.o 00:04:17.553 LIB libspdk_ublk.a 00:04:17.553 CC lib/ftl/ftl_debug.o 00:04:17.553 SO libspdk_ublk.so.3.0 00:04:17.553 LIB libspdk_scsi.a 00:04:17.553 SYMLINK libspdk_ublk.so 00:04:17.553 CC lib/ftl/ftl_io.o 00:04:17.553 SO libspdk_scsi.so.9.0 00:04:17.553 CC lib/ftl/ftl_sb.o 00:04:17.812 CC lib/nvmf/tcp.o 00:04:17.812 CC lib/nvmf/stubs.o 00:04:17.812 
SYMLINK libspdk_scsi.so 00:04:17.812 CC lib/ftl/ftl_l2p.o 00:04:17.812 CC lib/ftl/ftl_l2p_flat.o 00:04:17.812 CC lib/ftl/ftl_nv_cache.o 00:04:18.070 CC lib/ftl/ftl_band.o 00:04:18.070 CC lib/ftl/ftl_band_ops.o 00:04:18.070 CC lib/ftl/ftl_writer.o 00:04:18.070 CC lib/nvmf/mdns_server.o 00:04:18.070 CC lib/iscsi/conn.o 00:04:18.328 CC lib/iscsi/init_grp.o 00:04:18.328 CC lib/ftl/ftl_rq.o 00:04:18.328 CC lib/iscsi/iscsi.o 00:04:18.328 CC lib/ftl/ftl_reloc.o 00:04:18.328 CC lib/ftl/ftl_l2p_cache.o 00:04:18.586 CC lib/ftl/ftl_p2l.o 00:04:18.586 CC lib/vhost/vhost.o 00:04:18.586 CC lib/ftl/ftl_p2l_log.o 00:04:18.586 CC lib/ftl/mngt/ftl_mngt.o 00:04:18.586 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:18.586 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:18.845 CC lib/iscsi/param.o 00:04:18.845 CC lib/vhost/vhost_rpc.o 00:04:18.845 CC lib/iscsi/portal_grp.o 00:04:18.845 CC lib/iscsi/tgt_node.o 00:04:18.845 CC lib/iscsi/iscsi_subsystem.o 00:04:18.845 CC lib/iscsi/iscsi_rpc.o 00:04:18.845 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:19.102 CC lib/iscsi/task.o 00:04:19.102 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:19.102 CC lib/nvmf/rdma.o 00:04:19.102 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:19.102 CC lib/nvmf/auth.o 00:04:19.102 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:19.102 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:19.361 CC lib/vhost/vhost_scsi.o 00:04:19.361 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:19.361 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:19.361 CC lib/vhost/vhost_blk.o 00:04:19.361 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:19.361 CC lib/vhost/rte_vhost_user.o 00:04:19.361 LIB libspdk_iscsi.a 00:04:19.361 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:19.361 SO libspdk_iscsi.so.8.0 00:04:19.361 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:19.619 CC lib/ftl/utils/ftl_conf.o 00:04:19.619 CC lib/ftl/utils/ftl_md.o 00:04:19.619 SYMLINK libspdk_iscsi.so 00:04:19.619 CC lib/ftl/utils/ftl_mempool.o 00:04:19.619 CC lib/ftl/utils/ftl_bitmap.o 00:04:19.619 CC lib/ftl/utils/ftl_property.o 00:04:19.619 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:19.619 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:19.619 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:19.877 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:19.877 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:19.877 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:19.877 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:19.877 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:19.877 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:19.877 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:19.877 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:19.877 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:04:19.877 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:04:19.877 CC lib/ftl/base/ftl_base_dev.o 00:04:20.135 CC lib/ftl/base/ftl_base_bdev.o 00:04:20.135 CC lib/ftl/ftl_trace.o 00:04:20.135 LIB libspdk_ftl.a 00:04:20.135 LIB libspdk_vhost.a 00:04:20.393 SO libspdk_vhost.so.8.0 00:04:20.393 SYMLINK libspdk_vhost.so 00:04:20.393 SO libspdk_ftl.so.9.0 00:04:20.650 SYMLINK libspdk_ftl.so 00:04:21.217 LIB libspdk_nvmf.a 00:04:21.217 SO libspdk_nvmf.so.19.0 00:04:21.475 SYMLINK libspdk_nvmf.so 00:04:21.734 CC module/env_dpdk/env_dpdk_rpc.o 00:04:21.734 CC module/keyring/file/keyring.o 00:04:21.734 CC module/blob/bdev/blob_bdev.o 00:04:21.734 CC module/fsdev/aio/fsdev_aio.o 00:04:21.734 CC module/keyring/linux/keyring.o 00:04:21.734 CC module/sock/posix/posix.o 00:04:21.734 CC module/accel/error/accel_error.o 00:04:21.734 CC module/accel/dsa/accel_dsa.o 00:04:21.734 CC module/accel/ioat/accel_ioat.o 00:04:21.734 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:21.734 LIB 
libspdk_env_dpdk_rpc.a 00:04:21.734 SO libspdk_env_dpdk_rpc.so.6.0 00:04:21.734 CC module/keyring/linux/keyring_rpc.o 00:04:21.734 CC module/keyring/file/keyring_rpc.o 00:04:21.734 SYMLINK libspdk_env_dpdk_rpc.so 00:04:21.734 CC module/accel/error/accel_error_rpc.o 00:04:21.992 CC module/accel/ioat/accel_ioat_rpc.o 00:04:21.992 CC module/accel/dsa/accel_dsa_rpc.o 00:04:21.992 LIB libspdk_keyring_linux.a 00:04:21.992 LIB libspdk_keyring_file.a 00:04:21.992 LIB libspdk_scheduler_dynamic.a 00:04:21.992 SO libspdk_keyring_file.so.2.0 00:04:21.992 SO libspdk_keyring_linux.so.1.0 00:04:21.992 SO libspdk_scheduler_dynamic.so.4.0 00:04:21.992 LIB libspdk_accel_ioat.a 00:04:21.992 LIB libspdk_accel_error.a 00:04:21.992 SYMLINK libspdk_keyring_file.so 00:04:21.992 LIB libspdk_blob_bdev.a 00:04:21.992 CC module/fsdev/aio/fsdev_aio_rpc.o 00:04:21.992 SO libspdk_accel_error.so.2.0 00:04:21.992 CC module/fsdev/aio/linux_aio_mgr.o 00:04:21.992 LIB libspdk_accel_dsa.a 00:04:21.992 SYMLINK libspdk_keyring_linux.so 00:04:21.992 SYMLINK libspdk_scheduler_dynamic.so 00:04:21.992 SO libspdk_accel_ioat.so.6.0 00:04:21.992 SO libspdk_blob_bdev.so.11.0 00:04:21.992 SO libspdk_accel_dsa.so.5.0 00:04:21.992 SYMLINK libspdk_accel_ioat.so 00:04:21.993 SYMLINK libspdk_accel_error.so 00:04:21.993 SYMLINK libspdk_accel_dsa.so 00:04:21.993 SYMLINK libspdk_blob_bdev.so 00:04:22.271 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:22.271 CC module/accel/iaa/accel_iaa.o 00:04:22.271 CC module/accel/iaa/accel_iaa_rpc.o 00:04:22.271 CC module/scheduler/gscheduler/gscheduler.o 00:04:22.271 CC module/bdev/delay/vbdev_delay.o 00:04:22.271 LIB libspdk_scheduler_dpdk_governor.a 00:04:22.271 CC module/blobfs/bdev/blobfs_bdev.o 00:04:22.271 CC module/bdev/error/vbdev_error.o 00:04:22.271 CC module/bdev/gpt/gpt.o 00:04:22.271 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:22.271 LIB libspdk_accel_iaa.a 00:04:22.271 SO libspdk_accel_iaa.so.3.0 00:04:22.271 LIB libspdk_scheduler_gscheduler.a 00:04:22.271 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:22.580 CC module/bdev/error/vbdev_error_rpc.o 00:04:22.580 SO libspdk_scheduler_gscheduler.so.4.0 00:04:22.580 CC module/bdev/lvol/vbdev_lvol.o 00:04:22.580 LIB libspdk_fsdev_aio.a 00:04:22.580 SYMLINK libspdk_accel_iaa.so 00:04:22.580 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:22.580 SO libspdk_fsdev_aio.so.1.0 00:04:22.580 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:22.580 SYMLINK libspdk_scheduler_gscheduler.so 00:04:22.580 CC module/bdev/gpt/vbdev_gpt.o 00:04:22.580 SYMLINK libspdk_fsdev_aio.so 00:04:22.580 LIB libspdk_sock_posix.a 00:04:22.580 LIB libspdk_bdev_error.a 00:04:22.580 SO libspdk_sock_posix.so.6.0 00:04:22.580 SO libspdk_bdev_error.so.6.0 00:04:22.580 LIB libspdk_blobfs_bdev.a 00:04:22.580 CC module/bdev/malloc/bdev_malloc.o 00:04:22.580 SO libspdk_blobfs_bdev.so.6.0 00:04:22.580 LIB libspdk_bdev_delay.a 00:04:22.580 SYMLINK libspdk_sock_posix.so 00:04:22.580 SYMLINK libspdk_bdev_error.so 00:04:22.580 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:22.580 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:22.580 CC module/bdev/null/bdev_null.o 00:04:22.580 SO libspdk_bdev_delay.so.6.0 00:04:22.580 CC module/bdev/nvme/bdev_nvme.o 00:04:22.580 SYMLINK libspdk_blobfs_bdev.so 00:04:22.580 CC module/bdev/null/bdev_null_rpc.o 00:04:22.580 SYMLINK libspdk_bdev_delay.so 00:04:22.916 CC module/bdev/passthru/vbdev_passthru.o 00:04:22.916 LIB libspdk_bdev_gpt.a 00:04:22.916 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:22.916 SO libspdk_bdev_gpt.so.6.0 00:04:22.916 CC 
module/bdev/raid/bdev_raid.o 00:04:22.916 CC module/bdev/raid/bdev_raid_rpc.o 00:04:22.916 SYMLINK libspdk_bdev_gpt.so 00:04:22.916 LIB libspdk_bdev_null.a 00:04:22.916 LIB libspdk_bdev_lvol.a 00:04:22.916 SO libspdk_bdev_null.so.6.0 00:04:22.916 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:22.916 SO libspdk_bdev_lvol.so.6.0 00:04:22.916 LIB libspdk_bdev_malloc.a 00:04:22.916 LIB libspdk_bdev_passthru.a 00:04:22.916 CC module/bdev/split/vbdev_split.o 00:04:22.916 SYMLINK libspdk_bdev_null.so 00:04:22.916 SO libspdk_bdev_malloc.so.6.0 00:04:22.916 SO libspdk_bdev_passthru.so.6.0 00:04:22.916 SYMLINK libspdk_bdev_lvol.so 00:04:22.916 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:22.916 SYMLINK libspdk_bdev_malloc.so 00:04:22.916 SYMLINK libspdk_bdev_passthru.so 00:04:22.916 CC module/bdev/raid/bdev_raid_sb.o 00:04:22.916 CC module/bdev/raid/raid0.o 00:04:23.175 CC module/bdev/xnvme/bdev_xnvme.o 00:04:23.175 CC module/bdev/aio/bdev_aio.o 00:04:23.175 CC module/bdev/ftl/bdev_ftl.o 00:04:23.175 CC module/bdev/split/vbdev_split_rpc.o 00:04:23.175 CC module/bdev/aio/bdev_aio_rpc.o 00:04:23.433 CC module/bdev/nvme/nvme_rpc.o 00:04:23.433 LIB libspdk_bdev_split.a 00:04:23.433 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:23.433 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:04:23.433 SO libspdk_bdev_split.so.6.0 00:04:23.433 CC module/bdev/nvme/bdev_mdns_client.o 00:04:23.433 SYMLINK libspdk_bdev_split.so 00:04:23.433 CC module/bdev/nvme/vbdev_opal.o 00:04:23.433 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:23.433 LIB libspdk_bdev_aio.a 00:04:23.433 LIB libspdk_bdev_zone_block.a 00:04:23.433 SO libspdk_bdev_aio.so.6.0 00:04:23.433 LIB libspdk_bdev_xnvme.a 00:04:23.433 SO libspdk_bdev_zone_block.so.6.0 00:04:23.691 SO libspdk_bdev_xnvme.so.3.0 00:04:23.691 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:23.691 SYMLINK libspdk_bdev_aio.so 00:04:23.691 CC module/bdev/raid/raid1.o 00:04:23.691 CC module/bdev/raid/concat.o 00:04:23.691 SYMLINK libspdk_bdev_zone_block.so 00:04:23.691 SYMLINK libspdk_bdev_xnvme.so 00:04:23.691 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:23.691 LIB libspdk_bdev_ftl.a 00:04:23.691 SO libspdk_bdev_ftl.so.6.0 00:04:23.691 CC module/bdev/iscsi/bdev_iscsi.o 00:04:23.691 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:23.691 SYMLINK libspdk_bdev_ftl.so 00:04:23.691 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:23.691 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:23.691 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:23.950 LIB libspdk_bdev_raid.a 00:04:23.950 SO libspdk_bdev_raid.so.6.0 00:04:23.950 SYMLINK libspdk_bdev_raid.so 00:04:23.950 LIB libspdk_bdev_iscsi.a 00:04:23.950 SO libspdk_bdev_iscsi.so.6.0 00:04:24.207 SYMLINK libspdk_bdev_iscsi.so 00:04:24.207 LIB libspdk_bdev_virtio.a 00:04:24.207 SO libspdk_bdev_virtio.so.6.0 00:04:24.465 SYMLINK libspdk_bdev_virtio.so 00:04:24.723 LIB libspdk_bdev_nvme.a 00:04:24.723 SO libspdk_bdev_nvme.so.7.0 00:04:24.981 SYMLINK libspdk_bdev_nvme.so 00:04:25.240 CC module/event/subsystems/scheduler/scheduler.o 00:04:25.240 CC module/event/subsystems/sock/sock.o 00:04:25.240 CC module/event/subsystems/iobuf/iobuf.o 00:04:25.240 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:25.240 CC module/event/subsystems/keyring/keyring.o 00:04:25.240 CC module/event/subsystems/vmd/vmd.o 00:04:25.240 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:25.240 CC module/event/subsystems/fsdev/fsdev.o 00:04:25.241 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:25.501 LIB libspdk_event_keyring.a 00:04:25.501 LIB libspdk_event_scheduler.a 00:04:25.501 LIB 
libspdk_event_vmd.a 00:04:25.501 LIB libspdk_event_fsdev.a 00:04:25.501 SO libspdk_event_keyring.so.1.0 00:04:25.501 LIB libspdk_event_sock.a 00:04:25.501 SO libspdk_event_scheduler.so.4.0 00:04:25.501 LIB libspdk_event_iobuf.a 00:04:25.501 LIB libspdk_event_vhost_blk.a 00:04:25.501 SO libspdk_event_sock.so.5.0 00:04:25.501 SO libspdk_event_fsdev.so.1.0 00:04:25.501 SO libspdk_event_vmd.so.6.0 00:04:25.501 SO libspdk_event_vhost_blk.so.3.0 00:04:25.501 SO libspdk_event_iobuf.so.3.0 00:04:25.501 SYMLINK libspdk_event_keyring.so 00:04:25.501 SYMLINK libspdk_event_scheduler.so 00:04:25.501 SYMLINK libspdk_event_fsdev.so 00:04:25.501 SYMLINK libspdk_event_sock.so 00:04:25.501 SYMLINK libspdk_event_vhost_blk.so 00:04:25.501 SYMLINK libspdk_event_vmd.so 00:04:25.501 SYMLINK libspdk_event_iobuf.so 00:04:25.759 CC module/event/subsystems/accel/accel.o 00:04:25.759 LIB libspdk_event_accel.a 00:04:25.759 SO libspdk_event_accel.so.6.0 00:04:26.017 SYMLINK libspdk_event_accel.so 00:04:26.276 CC module/event/subsystems/bdev/bdev.o 00:04:26.276 LIB libspdk_event_bdev.a 00:04:26.276 SO libspdk_event_bdev.so.6.0 00:04:26.276 SYMLINK libspdk_event_bdev.so 00:04:26.534 CC module/event/subsystems/ublk/ublk.o 00:04:26.534 CC module/event/subsystems/nbd/nbd.o 00:04:26.534 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:26.534 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:26.534 CC module/event/subsystems/scsi/scsi.o 00:04:26.792 LIB libspdk_event_ublk.a 00:04:26.792 LIB libspdk_event_nbd.a 00:04:26.792 SO libspdk_event_ublk.so.3.0 00:04:26.792 SO libspdk_event_nbd.so.6.0 00:04:26.792 LIB libspdk_event_scsi.a 00:04:26.792 SO libspdk_event_scsi.so.6.0 00:04:26.792 SYMLINK libspdk_event_nbd.so 00:04:26.792 LIB libspdk_event_nvmf.a 00:04:26.792 SYMLINK libspdk_event_ublk.so 00:04:26.792 SO libspdk_event_nvmf.so.6.0 00:04:26.792 SYMLINK libspdk_event_scsi.so 00:04:26.792 SYMLINK libspdk_event_nvmf.so 00:04:27.051 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:27.051 CC module/event/subsystems/iscsi/iscsi.o 00:04:27.051 LIB libspdk_event_vhost_scsi.a 00:04:27.051 SO libspdk_event_vhost_scsi.so.3.0 00:04:27.051 LIB libspdk_event_iscsi.a 00:04:27.051 SO libspdk_event_iscsi.so.6.0 00:04:27.051 SYMLINK libspdk_event_vhost_scsi.so 00:04:27.309 SYMLINK libspdk_event_iscsi.so 00:04:27.309 SO libspdk.so.6.0 00:04:27.309 SYMLINK libspdk.so 00:04:27.568 TEST_HEADER include/spdk/accel.h 00:04:27.568 TEST_HEADER include/spdk/accel_module.h 00:04:27.568 CXX app/trace/trace.o 00:04:27.568 TEST_HEADER include/spdk/assert.h 00:04:27.568 TEST_HEADER include/spdk/barrier.h 00:04:27.568 TEST_HEADER include/spdk/base64.h 00:04:27.568 TEST_HEADER include/spdk/bdev.h 00:04:27.568 CC app/trace_record/trace_record.o 00:04:27.568 TEST_HEADER include/spdk/bdev_module.h 00:04:27.568 TEST_HEADER include/spdk/bdev_zone.h 00:04:27.568 TEST_HEADER include/spdk/bit_array.h 00:04:27.568 TEST_HEADER include/spdk/bit_pool.h 00:04:27.568 TEST_HEADER include/spdk/blob_bdev.h 00:04:27.568 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:27.568 TEST_HEADER include/spdk/blobfs.h 00:04:27.568 TEST_HEADER include/spdk/blob.h 00:04:27.568 TEST_HEADER include/spdk/conf.h 00:04:27.568 TEST_HEADER include/spdk/config.h 00:04:27.568 TEST_HEADER include/spdk/cpuset.h 00:04:27.568 TEST_HEADER include/spdk/crc16.h 00:04:27.568 TEST_HEADER include/spdk/crc32.h 00:04:27.568 TEST_HEADER include/spdk/crc64.h 00:04:27.568 TEST_HEADER include/spdk/dif.h 00:04:27.568 TEST_HEADER include/spdk/dma.h 00:04:27.568 TEST_HEADER include/spdk/endian.h 00:04:27.568 
TEST_HEADER include/spdk/env_dpdk.h 00:04:27.568 CC app/nvmf_tgt/nvmf_main.o 00:04:27.568 CC app/iscsi_tgt/iscsi_tgt.o 00:04:27.568 TEST_HEADER include/spdk/env.h 00:04:27.568 TEST_HEADER include/spdk/event.h 00:04:27.568 TEST_HEADER include/spdk/fd_group.h 00:04:27.568 TEST_HEADER include/spdk/fd.h 00:04:27.568 TEST_HEADER include/spdk/file.h 00:04:27.568 TEST_HEADER include/spdk/fsdev.h 00:04:27.568 TEST_HEADER include/spdk/fsdev_module.h 00:04:27.568 TEST_HEADER include/spdk/ftl.h 00:04:27.568 TEST_HEADER include/spdk/fuse_dispatcher.h 00:04:27.568 TEST_HEADER include/spdk/gpt_spec.h 00:04:27.568 TEST_HEADER include/spdk/hexlify.h 00:04:27.568 TEST_HEADER include/spdk/histogram_data.h 00:04:27.568 TEST_HEADER include/spdk/idxd.h 00:04:27.568 CC app/spdk_tgt/spdk_tgt.o 00:04:27.568 TEST_HEADER include/spdk/idxd_spec.h 00:04:27.568 TEST_HEADER include/spdk/init.h 00:04:27.568 TEST_HEADER include/spdk/ioat.h 00:04:27.568 TEST_HEADER include/spdk/ioat_spec.h 00:04:27.568 TEST_HEADER include/spdk/iscsi_spec.h 00:04:27.568 CC examples/util/zipf/zipf.o 00:04:27.568 CC test/thread/poller_perf/poller_perf.o 00:04:27.568 TEST_HEADER include/spdk/json.h 00:04:27.568 TEST_HEADER include/spdk/jsonrpc.h 00:04:27.568 TEST_HEADER include/spdk/keyring.h 00:04:27.568 TEST_HEADER include/spdk/keyring_module.h 00:04:27.568 TEST_HEADER include/spdk/likely.h 00:04:27.568 TEST_HEADER include/spdk/log.h 00:04:27.568 TEST_HEADER include/spdk/lvol.h 00:04:27.568 TEST_HEADER include/spdk/md5.h 00:04:27.568 TEST_HEADER include/spdk/memory.h 00:04:27.568 TEST_HEADER include/spdk/mmio.h 00:04:27.568 TEST_HEADER include/spdk/nbd.h 00:04:27.568 TEST_HEADER include/spdk/net.h 00:04:27.568 TEST_HEADER include/spdk/notify.h 00:04:27.568 TEST_HEADER include/spdk/nvme.h 00:04:27.568 TEST_HEADER include/spdk/nvme_intel.h 00:04:27.568 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:27.568 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:27.568 TEST_HEADER include/spdk/nvme_spec.h 00:04:27.568 TEST_HEADER include/spdk/nvme_zns.h 00:04:27.568 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:27.568 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:27.568 TEST_HEADER include/spdk/nvmf.h 00:04:27.568 TEST_HEADER include/spdk/nvmf_spec.h 00:04:27.568 TEST_HEADER include/spdk/nvmf_transport.h 00:04:27.568 CC test/app/bdev_svc/bdev_svc.o 00:04:27.568 TEST_HEADER include/spdk/opal.h 00:04:27.568 CC test/dma/test_dma/test_dma.o 00:04:27.568 TEST_HEADER include/spdk/opal_spec.h 00:04:27.568 TEST_HEADER include/spdk/pci_ids.h 00:04:27.568 TEST_HEADER include/spdk/pipe.h 00:04:27.568 TEST_HEADER include/spdk/queue.h 00:04:27.568 TEST_HEADER include/spdk/reduce.h 00:04:27.568 TEST_HEADER include/spdk/rpc.h 00:04:27.568 TEST_HEADER include/spdk/scheduler.h 00:04:27.568 TEST_HEADER include/spdk/scsi.h 00:04:27.568 TEST_HEADER include/spdk/scsi_spec.h 00:04:27.568 TEST_HEADER include/spdk/sock.h 00:04:27.568 TEST_HEADER include/spdk/stdinc.h 00:04:27.568 TEST_HEADER include/spdk/string.h 00:04:27.568 TEST_HEADER include/spdk/thread.h 00:04:27.568 TEST_HEADER include/spdk/trace.h 00:04:27.568 TEST_HEADER include/spdk/trace_parser.h 00:04:27.568 TEST_HEADER include/spdk/tree.h 00:04:27.568 TEST_HEADER include/spdk/ublk.h 00:04:27.569 TEST_HEADER include/spdk/util.h 00:04:27.569 TEST_HEADER include/spdk/uuid.h 00:04:27.569 TEST_HEADER include/spdk/version.h 00:04:27.569 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:27.569 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:27.569 TEST_HEADER include/spdk/vhost.h 00:04:27.569 TEST_HEADER include/spdk/vmd.h 
00:04:27.569 TEST_HEADER include/spdk/xor.h 00:04:27.569 TEST_HEADER include/spdk/zipf.h 00:04:27.569 CXX test/cpp_headers/accel.o 00:04:27.827 LINK nvmf_tgt 00:04:27.827 LINK poller_perf 00:04:27.827 LINK zipf 00:04:27.827 LINK iscsi_tgt 00:04:27.827 LINK spdk_trace_record 00:04:27.827 LINK bdev_svc 00:04:27.827 LINK spdk_tgt 00:04:27.827 CXX test/cpp_headers/accel_module.o 00:04:27.827 CXX test/cpp_headers/assert.o 00:04:27.827 LINK spdk_trace 00:04:28.085 CC examples/ioat/perf/perf.o 00:04:28.085 CXX test/cpp_headers/barrier.o 00:04:28.085 CC examples/vmd/lsvmd/lsvmd.o 00:04:28.085 CC examples/vmd/led/led.o 00:04:28.085 CC test/event/event_perf/event_perf.o 00:04:28.085 CC app/spdk_lspci/spdk_lspci.o 00:04:28.085 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:28.085 CC test/env/mem_callbacks/mem_callbacks.o 00:04:28.085 LINK test_dma 00:04:28.085 CXX test/cpp_headers/base64.o 00:04:28.085 CC examples/idxd/perf/perf.o 00:04:28.085 LINK led 00:04:28.085 LINK spdk_lspci 00:04:28.085 LINK lsvmd 00:04:28.344 LINK ioat_perf 00:04:28.344 LINK event_perf 00:04:28.344 CXX test/cpp_headers/bdev.o 00:04:28.344 LINK mem_callbacks 00:04:28.344 CXX test/cpp_headers/bdev_module.o 00:04:28.344 CC app/spdk_nvme_perf/perf.o 00:04:28.344 CC examples/ioat/verify/verify.o 00:04:28.344 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:28.344 CC test/env/vtophys/vtophys.o 00:04:28.344 CC test/event/reactor/reactor.o 00:04:28.344 LINK idxd_perf 00:04:28.603 LINK nvme_fuzz 00:04:28.603 CC examples/thread/thread/thread_ex.o 00:04:28.603 CXX test/cpp_headers/bdev_zone.o 00:04:28.603 CC test/app/histogram_perf/histogram_perf.o 00:04:28.603 CXX test/cpp_headers/bit_array.o 00:04:28.603 LINK interrupt_tgt 00:04:28.603 LINK reactor 00:04:28.603 LINK vtophys 00:04:28.603 LINK verify 00:04:28.603 LINK histogram_perf 00:04:28.603 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:28.862 CXX test/cpp_headers/bit_pool.o 00:04:28.862 LINK thread 00:04:28.862 CC test/event/reactor_perf/reactor_perf.o 00:04:28.862 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:28.862 CC test/app/jsoncat/jsoncat.o 00:04:28.862 CC test/app/stub/stub.o 00:04:28.862 CC examples/sock/hello_world/hello_sock.o 00:04:28.862 CC app/spdk_nvme_identify/identify.o 00:04:28.862 CXX test/cpp_headers/blob_bdev.o 00:04:28.862 LINK reactor_perf 00:04:28.862 LINK jsoncat 00:04:28.862 LINK env_dpdk_post_init 00:04:29.120 LINK stub 00:04:29.120 CC test/event/app_repeat/app_repeat.o 00:04:29.120 CXX test/cpp_headers/blobfs_bdev.o 00:04:29.120 LINK spdk_nvme_perf 00:04:29.120 LINK hello_sock 00:04:29.120 CC test/rpc_client/rpc_client_test.o 00:04:29.120 CC test/env/memory/memory_ut.o 00:04:29.120 LINK app_repeat 00:04:29.120 CXX test/cpp_headers/blobfs.o 00:04:29.120 CC examples/accel/perf/accel_perf.o 00:04:29.378 CC test/accel/dif/dif.o 00:04:29.378 LINK rpc_client_test 00:04:29.378 CXX test/cpp_headers/blob.o 00:04:29.378 CC test/blobfs/mkfs/mkfs.o 00:04:29.378 CXX test/cpp_headers/conf.o 00:04:29.378 CC test/event/scheduler/scheduler.o 00:04:29.379 CC test/lvol/esnap/esnap.o 00:04:29.637 CXX test/cpp_headers/config.o 00:04:29.637 LINK mkfs 00:04:29.637 CXX test/cpp_headers/cpuset.o 00:04:29.637 CC test/nvme/aer/aer.o 00:04:29.637 LINK scheduler 00:04:29.637 LINK spdk_nvme_identify 00:04:29.637 LINK accel_perf 00:04:29.637 CXX test/cpp_headers/crc16.o 00:04:29.896 CC test/nvme/reset/reset.o 00:04:29.896 CXX test/cpp_headers/crc32.o 00:04:29.896 LINK dif 00:04:29.896 CC app/spdk_nvme_discover/discovery_aer.o 00:04:29.896 LINK aer 00:04:29.896 LINK memory_ut 
00:04:29.896 CC test/env/pci/pci_ut.o 00:04:29.896 CXX test/cpp_headers/crc64.o 00:04:29.896 LINK reset 00:04:30.154 CC examples/blob/hello_world/hello_blob.o 00:04:30.154 CXX test/cpp_headers/dif.o 00:04:30.154 CXX test/cpp_headers/dma.o 00:04:30.154 CXX test/cpp_headers/endian.o 00:04:30.154 LINK spdk_nvme_discover 00:04:30.154 CC test/nvme/sgl/sgl.o 00:04:30.154 CXX test/cpp_headers/env_dpdk.o 00:04:30.154 CC app/spdk_top/spdk_top.o 00:04:30.154 CC test/bdev/bdevio/bdevio.o 00:04:30.154 LINK hello_blob 00:04:30.154 CC test/nvme/e2edp/nvme_dp.o 00:04:30.411 CC test/nvme/overhead/overhead.o 00:04:30.411 LINK pci_ut 00:04:30.411 CXX test/cpp_headers/env.o 00:04:30.411 LINK sgl 00:04:30.411 LINK iscsi_fuzz 00:04:30.411 CXX test/cpp_headers/event.o 00:04:30.411 CC examples/blob/cli/blobcli.o 00:04:30.669 LINK nvme_dp 00:04:30.669 CC test/nvme/err_injection/err_injection.o 00:04:30.669 LINK overhead 00:04:30.669 CXX test/cpp_headers/fd_group.o 00:04:30.669 LINK bdevio 00:04:30.669 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:30.669 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:30.669 CC test/nvme/startup/startup.o 00:04:30.669 LINK err_injection 00:04:30.669 CC test/nvme/reserve/reserve.o 00:04:30.669 CXX test/cpp_headers/fd.o 00:04:30.928 CXX test/cpp_headers/file.o 00:04:30.928 CXX test/cpp_headers/fsdev.o 00:04:30.928 CXX test/cpp_headers/fsdev_module.o 00:04:30.928 LINK startup 00:04:30.928 LINK blobcli 00:04:30.928 CXX test/cpp_headers/ftl.o 00:04:30.928 LINK reserve 00:04:30.928 CXX test/cpp_headers/fuse_dispatcher.o 00:04:31.187 CC examples/nvme/hello_world/hello_world.o 00:04:31.187 CC app/vhost/vhost.o 00:04:31.187 CC app/spdk_dd/spdk_dd.o 00:04:31.187 CXX test/cpp_headers/gpt_spec.o 00:04:31.187 LINK vhost_fuzz 00:04:31.187 CC app/fio/nvme/fio_plugin.o 00:04:31.187 LINK spdk_top 00:04:31.187 CC test/nvme/simple_copy/simple_copy.o 00:04:31.187 CC test/nvme/connect_stress/connect_stress.o 00:04:31.187 CXX test/cpp_headers/hexlify.o 00:04:31.187 LINK hello_world 00:04:31.187 LINK vhost 00:04:31.445 CC test/nvme/boot_partition/boot_partition.o 00:04:31.446 LINK connect_stress 00:04:31.446 CXX test/cpp_headers/histogram_data.o 00:04:31.446 LINK simple_copy 00:04:31.446 LINK spdk_dd 00:04:31.446 CC examples/fsdev/hello_world/hello_fsdev.o 00:04:31.446 CC examples/nvme/reconnect/reconnect.o 00:04:31.446 LINK boot_partition 00:04:31.446 CXX test/cpp_headers/idxd.o 00:04:31.446 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:31.446 CC examples/nvme/arbitration/arbitration.o 00:04:31.704 CC examples/nvme/hotplug/hotplug.o 00:04:31.704 CXX test/cpp_headers/idxd_spec.o 00:04:31.704 LINK hello_fsdev 00:04:31.704 CC test/nvme/compliance/nvme_compliance.o 00:04:31.704 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:31.704 LINK spdk_nvme 00:04:31.704 LINK reconnect 00:04:31.704 CXX test/cpp_headers/init.o 00:04:31.963 LINK cmb_copy 00:04:31.963 LINK hotplug 00:04:31.963 CXX test/cpp_headers/ioat.o 00:04:31.963 CC app/fio/bdev/fio_plugin.o 00:04:31.963 LINK arbitration 00:04:31.963 CXX test/cpp_headers/ioat_spec.o 00:04:31.963 CC examples/nvme/abort/abort.o 00:04:31.963 CC test/nvme/fused_ordering/fused_ordering.o 00:04:31.963 LINK nvme_compliance 00:04:31.963 LINK nvme_manage 00:04:31.963 CC test/nvme/fdp/fdp.o 00:04:31.963 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:31.963 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:32.222 CXX test/cpp_headers/iscsi_spec.o 00:04:32.222 CXX test/cpp_headers/json.o 00:04:32.222 LINK doorbell_aers 00:04:32.222 LINK fused_ordering 00:04:32.222 
LINK pmr_persistence 00:04:32.222 CXX test/cpp_headers/jsonrpc.o 00:04:32.222 CC test/nvme/cuse/cuse.o 00:04:32.222 CXX test/cpp_headers/keyring.o 00:04:32.480 CXX test/cpp_headers/keyring_module.o 00:04:32.480 LINK spdk_bdev 00:04:32.480 LINK fdp 00:04:32.480 LINK abort 00:04:32.480 CXX test/cpp_headers/likely.o 00:04:32.480 CXX test/cpp_headers/log.o 00:04:32.480 CXX test/cpp_headers/lvol.o 00:04:32.480 CXX test/cpp_headers/md5.o 00:04:32.480 CXX test/cpp_headers/memory.o 00:04:32.480 CC examples/bdev/hello_world/hello_bdev.o 00:04:32.480 CXX test/cpp_headers/mmio.o 00:04:32.480 CXX test/cpp_headers/nbd.o 00:04:32.480 CXX test/cpp_headers/net.o 00:04:32.480 CXX test/cpp_headers/notify.o 00:04:32.480 CC examples/bdev/bdevperf/bdevperf.o 00:04:32.480 CXX test/cpp_headers/nvme.o 00:04:32.738 CXX test/cpp_headers/nvme_intel.o 00:04:32.738 CXX test/cpp_headers/nvme_ocssd.o 00:04:32.738 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:32.738 CXX test/cpp_headers/nvme_spec.o 00:04:32.738 CXX test/cpp_headers/nvme_zns.o 00:04:32.738 LINK hello_bdev 00:04:32.738 CXX test/cpp_headers/nvmf_cmd.o 00:04:32.738 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:32.738 CXX test/cpp_headers/nvmf.o 00:04:32.738 CXX test/cpp_headers/nvmf_spec.o 00:04:32.738 CXX test/cpp_headers/nvmf_transport.o 00:04:32.738 CXX test/cpp_headers/opal.o 00:04:32.738 CXX test/cpp_headers/opal_spec.o 00:04:32.738 CXX test/cpp_headers/pci_ids.o 00:04:32.997 CXX test/cpp_headers/pipe.o 00:04:32.997 CXX test/cpp_headers/queue.o 00:04:32.997 CXX test/cpp_headers/reduce.o 00:04:32.997 CXX test/cpp_headers/rpc.o 00:04:32.997 CXX test/cpp_headers/scheduler.o 00:04:32.997 CXX test/cpp_headers/scsi.o 00:04:32.997 CXX test/cpp_headers/scsi_spec.o 00:04:32.997 CXX test/cpp_headers/sock.o 00:04:32.997 CXX test/cpp_headers/stdinc.o 00:04:32.997 CXX test/cpp_headers/string.o 00:04:32.997 CXX test/cpp_headers/thread.o 00:04:32.997 CXX test/cpp_headers/trace.o 00:04:32.997 CXX test/cpp_headers/trace_parser.o 00:04:32.997 CXX test/cpp_headers/tree.o 00:04:32.997 CXX test/cpp_headers/ublk.o 00:04:33.256 CXX test/cpp_headers/util.o 00:04:33.256 CXX test/cpp_headers/uuid.o 00:04:33.256 CXX test/cpp_headers/version.o 00:04:33.256 CXX test/cpp_headers/vfio_user_pci.o 00:04:33.256 CXX test/cpp_headers/vfio_user_spec.o 00:04:33.256 CXX test/cpp_headers/vhost.o 00:04:33.256 CXX test/cpp_headers/vmd.o 00:04:33.256 CXX test/cpp_headers/xor.o 00:04:33.256 CXX test/cpp_headers/zipf.o 00:04:33.256 LINK bdevperf 00:04:33.514 LINK cuse 00:04:33.772 CC examples/nvmf/nvmf/nvmf.o 00:04:34.030 LINK nvmf 00:04:34.030 LINK esnap 00:04:34.288 ************************************ 00:04:34.288 END TEST make 00:04:34.288 ************************************ 00:04:34.288 00:04:34.288 real 1m2.621s 00:04:34.288 user 5m16.683s 00:04:34.288 sys 0m51.681s 00:04:34.288 10:36:34 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:04:34.288 10:36:34 make -- common/autotest_common.sh@10 -- $ set +x 00:04:34.288 10:36:34 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:34.288 10:36:34 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:34.288 10:36:34 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:34.288 10:36:34 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:34.288 10:36:34 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:04:34.288 10:36:34 -- pm/common@44 -- $ pid=5811 00:04:34.288 10:36:34 -- pm/common@50 -- $ kill -TERM 5811 00:04:34.288 10:36:34 -- pm/common@42 -- $ for monitor 
in "${MONITOR_RESOURCES[@]}" 00:04:34.288 10:36:34 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:04:34.288 10:36:34 -- pm/common@44 -- $ pid=5812 00:04:34.288 10:36:34 -- pm/common@50 -- $ kill -TERM 5812 00:04:34.547 10:36:34 -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:34.547 10:36:34 -- common/autotest_common.sh@1681 -- # lcov --version 00:04:34.547 10:36:34 -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:34.547 10:36:34 -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:34.547 10:36:34 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:34.547 10:36:34 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:34.547 10:36:34 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:34.547 10:36:34 -- scripts/common.sh@336 -- # IFS=.-: 00:04:34.547 10:36:34 -- scripts/common.sh@336 -- # read -ra ver1 00:04:34.547 10:36:34 -- scripts/common.sh@337 -- # IFS=.-: 00:04:34.547 10:36:34 -- scripts/common.sh@337 -- # read -ra ver2 00:04:34.547 10:36:34 -- scripts/common.sh@338 -- # local 'op=<' 00:04:34.547 10:36:34 -- scripts/common.sh@340 -- # ver1_l=2 00:04:34.547 10:36:34 -- scripts/common.sh@341 -- # ver2_l=1 00:04:34.547 10:36:34 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:34.547 10:36:34 -- scripts/common.sh@344 -- # case "$op" in 00:04:34.547 10:36:34 -- scripts/common.sh@345 -- # : 1 00:04:34.547 10:36:34 -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:34.547 10:36:34 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:34.547 10:36:34 -- scripts/common.sh@365 -- # decimal 1 00:04:34.547 10:36:34 -- scripts/common.sh@353 -- # local d=1 00:04:34.547 10:36:34 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:34.547 10:36:34 -- scripts/common.sh@355 -- # echo 1 00:04:34.547 10:36:34 -- scripts/common.sh@365 -- # ver1[v]=1 00:04:34.547 10:36:34 -- scripts/common.sh@366 -- # decimal 2 00:04:34.547 10:36:34 -- scripts/common.sh@353 -- # local d=2 00:04:34.547 10:36:34 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:34.547 10:36:34 -- scripts/common.sh@355 -- # echo 2 00:04:34.547 10:36:34 -- scripts/common.sh@366 -- # ver2[v]=2 00:04:34.547 10:36:34 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:34.547 10:36:34 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:34.547 10:36:34 -- scripts/common.sh@368 -- # return 0 00:04:34.547 10:36:34 -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:34.547 10:36:34 -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:34.547 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:34.547 --rc genhtml_branch_coverage=1 00:04:34.547 --rc genhtml_function_coverage=1 00:04:34.547 --rc genhtml_legend=1 00:04:34.547 --rc geninfo_all_blocks=1 00:04:34.547 --rc geninfo_unexecuted_blocks=1 00:04:34.547 00:04:34.547 ' 00:04:34.547 10:36:34 -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:34.547 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:34.547 --rc genhtml_branch_coverage=1 00:04:34.547 --rc genhtml_function_coverage=1 00:04:34.547 --rc genhtml_legend=1 00:04:34.547 --rc geninfo_all_blocks=1 00:04:34.547 --rc geninfo_unexecuted_blocks=1 00:04:34.547 00:04:34.547 ' 00:04:34.547 10:36:34 -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:34.547 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:34.547 --rc genhtml_branch_coverage=1 00:04:34.547 --rc 
genhtml_function_coverage=1 00:04:34.547 --rc genhtml_legend=1 00:04:34.547 --rc geninfo_all_blocks=1 00:04:34.547 --rc geninfo_unexecuted_blocks=1 00:04:34.547 00:04:34.547 ' 00:04:34.547 10:36:34 -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:34.547 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:34.547 --rc genhtml_branch_coverage=1 00:04:34.547 --rc genhtml_function_coverage=1 00:04:34.547 --rc genhtml_legend=1 00:04:34.547 --rc geninfo_all_blocks=1 00:04:34.547 --rc geninfo_unexecuted_blocks=1 00:04:34.547 00:04:34.547 ' 00:04:34.547 10:36:34 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:34.547 10:36:34 -- nvmf/common.sh@7 -- # uname -s 00:04:34.547 10:36:34 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:34.547 10:36:34 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:34.547 10:36:34 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:34.547 10:36:34 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:34.547 10:36:34 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:34.547 10:36:34 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:34.547 10:36:34 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:34.547 10:36:34 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:34.547 10:36:34 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:34.547 10:36:34 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:34.547 10:36:34 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2f1bae93-a412-4c9d-ba56-7b4de9b8b370 00:04:34.547 10:36:34 -- nvmf/common.sh@18 -- # NVME_HOSTID=2f1bae93-a412-4c9d-ba56-7b4de9b8b370 00:04:34.547 10:36:34 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:34.547 10:36:34 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:34.547 10:36:34 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:34.547 10:36:34 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:34.547 10:36:34 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:34.547 10:36:34 -- scripts/common.sh@15 -- # shopt -s extglob 00:04:34.547 10:36:34 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:34.547 10:36:34 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:34.547 10:36:34 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:34.547 10:36:34 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:34.547 10:36:34 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:34.547 10:36:34 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:34.547 10:36:34 -- paths/export.sh@5 -- # export PATH 00:04:34.547 10:36:34 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:34.547 10:36:34 -- nvmf/common.sh@51 -- # : 0 00:04:34.548 10:36:34 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:34.548 10:36:34 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:34.548 10:36:34 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:34.548 10:36:34 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:34.548 10:36:34 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:34.548 10:36:34 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:34.548 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:34.548 10:36:34 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:34.548 10:36:34 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:34.548 10:36:34 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:34.548 10:36:34 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:34.548 10:36:34 -- spdk/autotest.sh@32 -- # uname -s 00:04:34.548 10:36:34 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:34.548 10:36:34 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:34.548 10:36:34 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:34.548 10:36:34 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:34.548 10:36:34 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:34.548 10:36:34 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:34.548 10:36:34 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:34.548 10:36:34 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:34.548 10:36:34 -- spdk/autotest.sh@48 -- # udevadm_pid=66605 00:04:34.548 10:36:34 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:34.548 10:36:34 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:34.548 10:36:34 -- pm/common@17 -- # local monitor 00:04:34.548 10:36:34 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:34.548 10:36:34 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:34.548 10:36:34 -- pm/common@25 -- # sleep 1 00:04:34.548 10:36:34 -- pm/common@21 -- # date +%s 00:04:34.548 10:36:34 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1734345394 00:04:34.548 10:36:34 -- pm/common@21 -- # date +%s 00:04:34.548 10:36:34 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1734345394 00:04:34.548 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1734345394_collect-vmstat.pm.log 00:04:34.548 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1734345394_collect-cpu-load.pm.log 00:04:35.482 10:36:35 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:35.482 10:36:35 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:35.482 10:36:35 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:35.482 10:36:35 -- common/autotest_common.sh@10 -- # set +x 00:04:35.482 10:36:35 -- spdk/autotest.sh@59 -- # create_test_list 
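The xtrace above shows autotest.sh saving the systemd-coredump handler and pointing the kernel at SPDK's core-collector.sh before any tests run. A minimal sketch of that save-and-swap, assuming the restore is wired through an EXIT trap (the log only shows the save and the echo steps), with $rootdir and $output_dir standing in for the spdk_repo paths from the trace:

# Save the current kernel core handler, then route cores through the collector.
old_core_pattern=$(< /proc/sys/kernel/core_pattern)
mkdir -p "$output_dir/coredumps"
echo "|$rootdir/scripts/core-collector.sh %P %s %t" > /proc/sys/kernel/core_pattern
# Assumed cleanup: put the original handler back when the run ends.
trap 'echo "$old_core_pattern" > /proc/sys/kernel/core_pattern' EXIT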
00:04:35.482 10:36:35 -- common/autotest_common.sh@748 -- # xtrace_disable 00:04:35.482 10:36:35 -- common/autotest_common.sh@10 -- # set +x 00:04:35.740 10:36:35 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:04:35.740 10:36:35 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:04:35.740 10:36:35 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:04:35.740 10:36:35 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:04:35.740 10:36:35 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:04:35.740 10:36:35 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:35.740 10:36:35 -- common/autotest_common.sh@1455 -- # uname 00:04:35.740 10:36:35 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:04:35.740 10:36:35 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:35.740 10:36:35 -- common/autotest_common.sh@1475 -- # uname 00:04:35.740 10:36:35 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:04:35.740 10:36:35 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:04:35.740 10:36:35 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:04:35.740 lcov: LCOV version 1.15 00:04:35.740 10:36:35 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:47.960 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:47.961 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:05:02.836 10:37:01 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:05:02.836 10:37:01 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:02.836 10:37:01 -- common/autotest_common.sh@10 -- # set +x 00:05:02.836 10:37:01 -- spdk/autotest.sh@78 -- # rm -f 00:05:02.836 10:37:01 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:02.836 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:02.836 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:05:02.836 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:05:02.836 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:05:02.836 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:05:02.836 10:37:02 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:05:02.836 10:37:02 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:05:02.836 10:37:02 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:05:02.836 10:37:02 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:05:02.836 10:37:02 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:02.836 10:37:02 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:05:02.836 10:37:02 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:05:02.836 10:37:02 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:02.836 10:37:02 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:02.836 10:37:02 -- 
common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:02.836 10:37:02 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:05:02.836 10:37:02 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:05:02.836 10:37:02 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:02.836 10:37:02 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:02.836 10:37:02 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:02.836 10:37:02 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n1 00:05:02.836 10:37:02 -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:05:02.836 10:37:02 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:05:02.836 10:37:02 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:02.836 10:37:02 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:02.836 10:37:02 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n2 00:05:02.836 10:37:02 -- common/autotest_common.sh@1648 -- # local device=nvme2n2 00:05:02.836 10:37:02 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:05:02.836 10:37:02 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:02.836 10:37:02 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:02.836 10:37:02 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n3 00:05:02.836 10:37:02 -- common/autotest_common.sh@1648 -- # local device=nvme2n3 00:05:02.836 10:37:02 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:05:02.836 10:37:02 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:02.836 10:37:02 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:02.836 10:37:02 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3c3n1 00:05:02.836 10:37:02 -- common/autotest_common.sh@1648 -- # local device=nvme3c3n1 00:05:02.836 10:37:02 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:05:02.836 10:37:02 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:02.836 10:37:02 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:02.836 10:37:02 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n1 00:05:02.836 10:37:02 -- common/autotest_common.sh@1648 -- # local device=nvme3n1 00:05:02.836 10:37:02 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:05:02.836 10:37:02 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:02.836 10:37:02 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:05:02.836 10:37:02 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:02.836 10:37:02 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:02.836 10:37:02 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:05:02.836 10:37:02 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:05:02.836 10:37:02 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:02.836 No valid GPT data, bailing 00:05:02.836 10:37:02 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:02.836 10:37:02 -- scripts/common.sh@394 -- # pt= 00:05:02.836 10:37:02 -- scripts/common.sh@395 -- # return 1 00:05:02.836 10:37:02 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:02.836 1+0 records in 00:05:02.836 1+0 records out 00:05:02.836 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0186006 s, 56.4 MB/s 
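The get_zoned_devs loop traced above decides, for every /sys/block/nvme* entry, whether the namespace is zoned by reading the queue/zoned sysfs attribute, which reports "none" for conventional devices and "host-aware" or "host-managed" for zoned ones. A condensed sketch of the same check (the real helper records more than a flag; a 1 is enough here):

# Collect zoned namespaces so the GPT/dd probes later in the log can skip them.
declare -A zoned_devs=()
for nvme in /sys/block/nvme*; do
    dev=${nvme##*/}
    if [[ -e $nvme/queue/zoned && $(< "$nvme/queue/zoned") != none ]]; then
        zoned_devs[$dev]=1
    fi
done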
00:05:02.836 10:37:02 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:02.836 10:37:02 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:02.836 10:37:02 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:05:02.836 10:37:02 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:05:02.836 10:37:02 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:05:02.836 No valid GPT data, bailing 00:05:02.836 10:37:02 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:02.836 10:37:02 -- scripts/common.sh@394 -- # pt= 00:05:02.836 10:37:02 -- scripts/common.sh@395 -- # return 1 00:05:02.836 10:37:02 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:05:02.836 1+0 records in 00:05:02.836 1+0 records out 00:05:02.836 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00589235 s, 178 MB/s 00:05:02.836 10:37:02 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:02.836 10:37:02 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:02.836 10:37:02 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n1 00:05:02.836 10:37:02 -- scripts/common.sh@381 -- # local block=/dev/nvme2n1 pt 00:05:02.836 10:37:02 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:05:02.836 No valid GPT data, bailing 00:05:02.836 10:37:02 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:05:02.836 10:37:02 -- scripts/common.sh@394 -- # pt= 00:05:02.836 10:37:02 -- scripts/common.sh@395 -- # return 1 00:05:02.836 10:37:02 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:05:02.836 1+0 records in 00:05:02.836 1+0 records out 00:05:02.836 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00482286 s, 217 MB/s 00:05:02.836 10:37:02 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:02.836 10:37:02 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:02.836 10:37:02 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n2 00:05:02.836 10:37:02 -- scripts/common.sh@381 -- # local block=/dev/nvme2n2 pt 00:05:02.836 10:37:02 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n2 00:05:02.836 No valid GPT data, bailing 00:05:02.836 10:37:02 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:05:02.836 10:37:02 -- scripts/common.sh@394 -- # pt= 00:05:02.836 10:37:02 -- scripts/common.sh@395 -- # return 1 00:05:02.836 10:37:02 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n2 bs=1M count=1 00:05:02.836 1+0 records in 00:05:02.836 1+0 records out 00:05:02.836 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00408812 s, 256 MB/s 00:05:02.836 10:37:02 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:02.836 10:37:02 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:02.836 10:37:02 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n3 00:05:02.836 10:37:02 -- scripts/common.sh@381 -- # local block=/dev/nvme2n3 pt 00:05:02.836 10:37:02 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n3 00:05:02.836 No valid GPT data, bailing 00:05:02.836 10:37:02 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:05:02.836 10:37:02 -- scripts/common.sh@394 -- # pt= 00:05:02.836 10:37:02 -- scripts/common.sh@395 -- # return 1 00:05:02.836 10:37:02 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n3 bs=1M count=1 00:05:02.836 1+0 records in 00:05:02.836 1+0 records out 00:05:02.836 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00548025 s, 191 
MB/s 00:05:02.836 10:37:02 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:02.836 10:37:02 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:02.836 10:37:02 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n1 00:05:02.836 10:37:02 -- scripts/common.sh@381 -- # local block=/dev/nvme3n1 pt 00:05:02.836 10:37:02 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:05:02.836 No valid GPT data, bailing 00:05:02.836 10:37:02 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:05:02.836 10:37:02 -- scripts/common.sh@394 -- # pt= 00:05:02.836 10:37:02 -- scripts/common.sh@395 -- # return 1 00:05:02.836 10:37:02 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:05:02.836 1+0 records in 00:05:02.836 1+0 records out 00:05:02.836 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00559062 s, 188 MB/s 00:05:02.836 10:37:02 -- spdk/autotest.sh@105 -- # sync 00:05:03.096 10:37:03 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:03.097 10:37:03 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:03.097 10:37:03 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:05.010 10:37:04 -- spdk/autotest.sh@111 -- # uname -s 00:05:05.010 10:37:04 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:05:05.010 10:37:04 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:05:05.010 10:37:04 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:05.272 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:05.844 Hugepages 00:05:05.844 node hugesize free / total 00:05:05.844 node0 1048576kB 0 / 0 00:05:05.844 node0 2048kB 0 / 0 00:05:05.844 00:05:05.844 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:05.844 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:05.844 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:05:05.844 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:05:05.844 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:05:06.106 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:05:06.106 10:37:05 -- spdk/autotest.sh@117 -- # uname -s 00:05:06.106 10:37:05 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:05:06.106 10:37:05 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:05:06.106 10:37:05 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:06.366 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:06.937 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:07.198 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:07.198 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:05:07.198 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:05:07.198 10:37:07 -- common/autotest_common.sh@1515 -- # sleep 1 00:05:08.139 10:37:08 -- common/autotest_common.sh@1516 -- # bdfs=() 00:05:08.139 10:37:08 -- common/autotest_common.sh@1516 -- # local bdfs 00:05:08.139 10:37:08 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:05:08.139 10:37:08 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:05:08.139 10:37:08 -- common/autotest_common.sh@1496 -- # bdfs=() 00:05:08.139 10:37:08 -- common/autotest_common.sh@1496 -- # local bdfs 00:05:08.139 10:37:08 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 
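Each block_in_use probe above follows the same recipe: ask scripts/spdk-gpt.py whether SPDK already owns the namespace, fall back to blkid to look for a partition table, and only then zero the first MiB so stale metadata cannot bleed into the tests. Roughly, with the spdk-gpt.py step elided:

shopt -s extglob                         # needed for the nvme*n!(*p*) pattern from the trace
for dev in /dev/nvme*n!(*p*); do         # whole namespaces only, no partitions
    pt=$(blkid -s PTTYPE -o value "$dev" || true)
    if [[ -z $pt ]]; then                # no partition table found on the device
        dd if=/dev/zero of="$dev" bs=1M count=1
    fi
done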
00:05:08.139 10:37:08 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:08.139 10:37:08 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:05:08.139 10:37:08 -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:05:08.139 10:37:08 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:05:08.139 10:37:08 -- common/autotest_common.sh@1520 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:08.710 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:08.710 Waiting for block devices as requested 00:05:08.710 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:05:08.710 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:05:08.969 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:05:08.969 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:05:14.284 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:05:14.284 10:37:13 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:05:14.284 10:37:13 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:05:14.284 10:37:13 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:14.284 10:37:13 -- common/autotest_common.sh@1485 -- # grep 0000:00:10.0/nvme/nvme 00:05:14.284 10:37:13 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:14.284 10:37:13 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:05:14.284 10:37:13 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:14.284 10:37:13 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme1 00:05:14.284 10:37:13 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme1 00:05:14.284 10:37:13 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme1 ]] 00:05:14.284 10:37:13 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme1 00:05:14.284 10:37:13 -- common/autotest_common.sh@1529 -- # grep oacs 00:05:14.284 10:37:13 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:05:14.284 10:37:13 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:05:14.284 10:37:13 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:05:14.284 10:37:13 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:05:14.284 10:37:13 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme1 00:05:14.284 10:37:13 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:05:14.284 10:37:13 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:05:14.284 10:37:13 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:05:14.284 10:37:13 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:05:14.284 10:37:13 -- common/autotest_common.sh@1541 -- # continue 00:05:14.284 10:37:13 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:05:14.284 10:37:13 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:05:14.284 10:37:13 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:14.284 10:37:13 -- common/autotest_common.sh@1485 -- # grep 0000:00:11.0/nvme/nvme 00:05:14.284 10:37:13 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 
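The readlink/grep pair traced in this stretch is how the script maps a PCI address to its NVMe character device: every /sys/class/nvme/nvmeN symlink resolves to a sysfs path that embeds the controller's BDF. A sketch of that lookup (the helper name comes from the trace; the exact signature is an assumption):

get_nvme_ctrlr_from_bdf() {
    local bdf=$1 path
    # e.g. /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 -> nvme1
    path=$(readlink -f /sys/class/nvme/nvme* | grep "$bdf/nvme/nvme") || return 1
    echo "/dev/$(basename "$path")"
}
get_nvme_ctrlr_from_bdf 0000:00:10.0     # prints /dev/nvme1 in this run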
00:05:14.284 10:37:13 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:05:14.284 10:37:13 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:14.284 10:37:13 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:05:14.284 10:37:13 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:05:14.284 10:37:13 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:05:14.284 10:37:13 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:05:14.284 10:37:13 -- common/autotest_common.sh@1529 -- # grep oacs 00:05:14.284 10:37:13 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:05:14.284 10:37:13 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:05:14.284 10:37:13 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:05:14.284 10:37:14 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:05:14.284 10:37:14 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:05:14.284 10:37:14 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:05:14.284 10:37:14 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:05:14.284 10:37:14 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:05:14.284 10:37:14 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:05:14.284 10:37:14 -- common/autotest_common.sh@1541 -- # continue 00:05:14.284 10:37:14 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:05:14.284 10:37:14 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:05:14.284 10:37:14 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:14.284 10:37:14 -- common/autotest_common.sh@1485 -- # grep 0000:00:12.0/nvme/nvme 00:05:14.284 10:37:14 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:05:14.284 10:37:14 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:05:14.284 10:37:14 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:05:14.284 10:37:14 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme2 00:05:14.284 10:37:14 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme2 00:05:14.284 10:37:14 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme2 ]] 00:05:14.284 10:37:14 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme2 00:05:14.284 10:37:14 -- common/autotest_common.sh@1529 -- # grep oacs 00:05:14.284 10:37:14 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:05:14.284 10:37:14 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:05:14.284 10:37:14 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:05:14.284 10:37:14 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:05:14.284 10:37:14 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme2 00:05:14.284 10:37:14 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:05:14.284 10:37:14 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:05:14.284 10:37:14 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:05:14.284 10:37:14 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:05:14.284 10:37:14 -- common/autotest_common.sh@1541 -- # continue 00:05:14.284 10:37:14 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:05:14.284 10:37:14 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:05:14.284 10:37:14 -- 
common/autotest_common.sh@1485 -- # grep 0000:00:13.0/nvme/nvme 00:05:14.284 10:37:14 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:14.284 10:37:14 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:05:14.284 10:37:14 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:05:14.284 10:37:14 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:05:14.284 10:37:14 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme3 00:05:14.284 10:37:14 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme3 00:05:14.284 10:37:14 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme3 ]] 00:05:14.284 10:37:14 -- common/autotest_common.sh@1529 -- # grep oacs 00:05:14.284 10:37:14 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme3 00:05:14.284 10:37:14 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:05:14.284 10:37:14 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:05:14.284 10:37:14 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:05:14.284 10:37:14 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:05:14.284 10:37:14 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme3 00:05:14.284 10:37:14 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:05:14.284 10:37:14 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:05:14.284 10:37:14 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:05:14.284 10:37:14 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:05:14.284 10:37:14 -- common/autotest_common.sh@1541 -- # continue 00:05:14.284 10:37:14 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:05:14.284 10:37:14 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:14.284 10:37:14 -- common/autotest_common.sh@10 -- # set +x 00:05:14.284 10:37:14 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:05:14.284 10:37:14 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:14.284 10:37:14 -- common/autotest_common.sh@10 -- # set +x 00:05:14.285 10:37:14 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:14.856 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:15.429 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:15.429 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:15.429 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:05:15.429 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:05:15.429 10:37:15 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:05:15.429 10:37:15 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:15.429 10:37:15 -- common/autotest_common.sh@10 -- # set +x 00:05:15.429 10:37:15 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:05:15.429 10:37:15 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:05:15.429 10:37:15 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:05:15.429 10:37:15 -- common/autotest_common.sh@1561 -- # bdfs=() 00:05:15.429 10:37:15 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:05:15.429 10:37:15 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:05:15.429 10:37:15 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:05:15.429 10:37:15 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:05:15.429 10:37:15 -- common/autotest_common.sh@1496 -- # bdfs=() 00:05:15.429 
10:37:15 -- common/autotest_common.sh@1496 -- # local bdfs 00:05:15.429 10:37:15 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:15.429 10:37:15 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:15.429 10:37:15 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:05:15.429 10:37:15 -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:05:15.429 10:37:15 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:05:15.429 10:37:15 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:05:15.429 10:37:15 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:05:15.429 10:37:15 -- common/autotest_common.sh@1564 -- # device=0x0010 00:05:15.429 10:37:15 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:15.429 10:37:15 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:05:15.429 10:37:15 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:05:15.429 10:37:15 -- common/autotest_common.sh@1564 -- # device=0x0010 00:05:15.429 10:37:15 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:15.429 10:37:15 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:05:15.429 10:37:15 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:05:15.429 10:37:15 -- common/autotest_common.sh@1564 -- # device=0x0010 00:05:15.429 10:37:15 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:15.429 10:37:15 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:05:15.429 10:37:15 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:05:15.429 10:37:15 -- common/autotest_common.sh@1564 -- # device=0x0010 00:05:15.429 10:37:15 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:15.429 10:37:15 -- common/autotest_common.sh@1570 -- # (( 0 > 0 )) 00:05:15.429 10:37:15 -- common/autotest_common.sh@1570 -- # return 0 00:05:15.429 10:37:15 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:05:15.429 10:37:15 -- common/autotest_common.sh@1578 -- # return 0 00:05:15.429 10:37:15 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:05:15.429 10:37:15 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:05:15.429 10:37:15 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:15.429 10:37:15 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:15.429 10:37:15 -- spdk/autotest.sh@149 -- # timing_enter lib 00:05:15.689 10:37:15 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:15.689 10:37:15 -- common/autotest_common.sh@10 -- # set +x 00:05:15.689 10:37:15 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:05:15.689 10:37:15 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:15.689 10:37:15 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:15.689 10:37:15 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:15.689 10:37:15 -- common/autotest_common.sh@10 -- # set +x 00:05:15.689 ************************************ 00:05:15.689 START TEST env 00:05:15.689 ************************************ 00:05:15.689 10:37:15 env -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:15.689 * Looking for test storage... 
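The pre_cleanup pass above reduces to a few nvme-cli and sysfs probes per controller: resolve the BDF to a /dev/nvmeN node, read OACS and test the Namespace Management bit, read unallocated capacity, and (for opal_revert_cleanup) match the PCI device ID against 0x0a54. A condensed, hand-runnable restatement of the same checks; the bdf and the 0x12a/0x0010 readings are this run's values, not constants:

    bdf=0000:00:11.0
    ctrlr=/dev/$(basename "$(readlink -f /sys/class/nvme/nvme* | grep "$bdf/nvme/nvme")")
    oacs=$(nvme id-ctrl "$ctrlr" | grep oacs | cut -d: -f2)   # ' 0x12a' here
    echo $(( oacs & 0x8 ))    # bit 3 = Namespace Management; 0x12a & 0x8 = 8, so supported
    nvme id-ctrl "$ctrlr" | grep unvmcap | cut -d: -f2        # ' 0': nothing to reclaim
    cat /sys/bus/pci/devices/$bdf/device   # 0x0010 (QEMU NVMe) != 0x0a54, so no opal revert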
00:05:15.689 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:05:15.689 10:37:15 env -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:15.689 10:37:15 env -- common/autotest_common.sh@1681 -- # lcov --version 00:05:15.689 10:37:15 env -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:15.689 10:37:15 env -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:15.689 10:37:15 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:15.689 10:37:15 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:15.689 10:37:15 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:15.689 10:37:15 env -- scripts/common.sh@336 -- # IFS=.-: 00:05:15.689 10:37:15 env -- scripts/common.sh@336 -- # read -ra ver1 00:05:15.689 10:37:15 env -- scripts/common.sh@337 -- # IFS=.-: 00:05:15.689 10:37:15 env -- scripts/common.sh@337 -- # read -ra ver2 00:05:15.689 10:37:15 env -- scripts/common.sh@338 -- # local 'op=<' 00:05:15.689 10:37:15 env -- scripts/common.sh@340 -- # ver1_l=2 00:05:15.689 10:37:15 env -- scripts/common.sh@341 -- # ver2_l=1 00:05:15.689 10:37:15 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:15.689 10:37:15 env -- scripts/common.sh@344 -- # case "$op" in 00:05:15.689 10:37:15 env -- scripts/common.sh@345 -- # : 1 00:05:15.689 10:37:15 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:15.689 10:37:15 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:15.689 10:37:15 env -- scripts/common.sh@365 -- # decimal 1 00:05:15.689 10:37:15 env -- scripts/common.sh@353 -- # local d=1 00:05:15.689 10:37:15 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:15.689 10:37:15 env -- scripts/common.sh@355 -- # echo 1 00:05:15.689 10:37:15 env -- scripts/common.sh@365 -- # ver1[v]=1 00:05:15.689 10:37:15 env -- scripts/common.sh@366 -- # decimal 2 00:05:15.689 10:37:15 env -- scripts/common.sh@353 -- # local d=2 00:05:15.689 10:37:15 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:15.689 10:37:15 env -- scripts/common.sh@355 -- # echo 2 00:05:15.689 10:37:15 env -- scripts/common.sh@366 -- # ver2[v]=2 00:05:15.689 10:37:15 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:15.689 10:37:15 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:15.689 10:37:15 env -- scripts/common.sh@368 -- # return 0 00:05:15.689 10:37:15 env -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:15.689 10:37:15 env -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:15.689 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.689 --rc genhtml_branch_coverage=1 00:05:15.689 --rc genhtml_function_coverage=1 00:05:15.689 --rc genhtml_legend=1 00:05:15.689 --rc geninfo_all_blocks=1 00:05:15.689 --rc geninfo_unexecuted_blocks=1 00:05:15.689 00:05:15.689 ' 00:05:15.689 10:37:15 env -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:15.689 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.689 --rc genhtml_branch_coverage=1 00:05:15.689 --rc genhtml_function_coverage=1 00:05:15.689 --rc genhtml_legend=1 00:05:15.689 --rc geninfo_all_blocks=1 00:05:15.689 --rc geninfo_unexecuted_blocks=1 00:05:15.689 00:05:15.689 ' 00:05:15.689 10:37:15 env -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:15.689 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.689 --rc genhtml_branch_coverage=1 00:05:15.689 --rc genhtml_function_coverage=1 00:05:15.689 --rc 
genhtml_legend=1 00:05:15.689 --rc geninfo_all_blocks=1 00:05:15.689 --rc geninfo_unexecuted_blocks=1 00:05:15.689 00:05:15.689 ' 00:05:15.689 10:37:15 env -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:15.689 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.689 --rc genhtml_branch_coverage=1 00:05:15.689 --rc genhtml_function_coverage=1 00:05:15.689 --rc genhtml_legend=1 00:05:15.689 --rc geninfo_all_blocks=1 00:05:15.689 --rc geninfo_unexecuted_blocks=1 00:05:15.689 00:05:15.689 ' 00:05:15.689 10:37:15 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:15.689 10:37:15 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:15.689 10:37:15 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:15.689 10:37:15 env -- common/autotest_common.sh@10 -- # set +x 00:05:15.689 ************************************ 00:05:15.689 START TEST env_memory 00:05:15.689 ************************************ 00:05:15.689 10:37:15 env.env_memory -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:15.689 00:05:15.689 00:05:15.689 CUnit - A unit testing framework for C - Version 2.1-3 00:05:15.689 http://cunit.sourceforge.net/ 00:05:15.689 00:05:15.689 00:05:15.689 Suite: memory 00:05:15.689 Test: alloc and free memory map ...[2024-12-16 10:37:15.654917] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:15.949 passed 00:05:15.949 Test: mem map translation ...[2024-12-16 10:37:15.693974] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:15.949 [2024-12-16 10:37:15.694010] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:15.949 [2024-12-16 10:37:15.694066] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:15.949 [2024-12-16 10:37:15.694080] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:15.949 passed 00:05:15.949 Test: mem map registration ...[2024-12-16 10:37:15.762017] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:05:15.950 [2024-12-16 10:37:15.762047] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:05:15.950 passed 00:05:15.950 Test: mem map adjacent registrations ...passed 00:05:15.950 00:05:15.950 Run Summary: Type Total Ran Passed Failed Inactive 00:05:15.950 suites 1 1 n/a 0 0 00:05:15.950 tests 4 4 4 0 0 00:05:15.950 asserts 152 152 152 0 n/a 00:05:15.950 00:05:15.950 Elapsed time = 0.233 seconds 00:05:15.950 00:05:15.950 real 0m0.269s 00:05:15.950 user 0m0.243s 00:05:15.950 sys 0m0.019s 00:05:15.950 10:37:15 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:15.950 10:37:15 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:15.950 ************************************ 00:05:15.950 END TEST env_memory 00:05:15.950 ************************************ 00:05:15.950 10:37:15 env -- env/env.sh@11 -- # run_test env_vtophys 
/home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:15.950 10:37:15 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:15.950 10:37:15 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:15.950 10:37:15 env -- common/autotest_common.sh@10 -- # set +x 00:05:15.950 ************************************ 00:05:15.950 START TEST env_vtophys 00:05:15.950 ************************************ 00:05:15.950 10:37:15 env.env_vtophys -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:16.211 EAL: lib.eal log level changed from notice to debug 00:05:16.211 EAL: Detected lcore 0 as core 0 on socket 0 00:05:16.211 EAL: Detected lcore 1 as core 0 on socket 0 00:05:16.211 EAL: Detected lcore 2 as core 0 on socket 0 00:05:16.211 EAL: Detected lcore 3 as core 0 on socket 0 00:05:16.211 EAL: Detected lcore 4 as core 0 on socket 0 00:05:16.211 EAL: Detected lcore 5 as core 0 on socket 0 00:05:16.211 EAL: Detected lcore 6 as core 0 on socket 0 00:05:16.211 EAL: Detected lcore 7 as core 0 on socket 0 00:05:16.211 EAL: Detected lcore 8 as core 0 on socket 0 00:05:16.211 EAL: Detected lcore 9 as core 0 on socket 0 00:05:16.211 EAL: Maximum logical cores by configuration: 128 00:05:16.211 EAL: Detected CPU lcores: 10 00:05:16.211 EAL: Detected NUMA nodes: 1 00:05:16.211 EAL: Checking presence of .so 'librte_eal.so.23.0' 00:05:16.211 EAL: Detected shared linkage of DPDK 00:05:16.211 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23.0 00:05:16.211 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23.0 00:05:16.211 EAL: Registered [vdev] bus. 00:05:16.211 EAL: bus.vdev log level changed from disabled to notice 00:05:16.211 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23.0 00:05:16.211 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23.0 00:05:16.211 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:05:16.211 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:05:16.211 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:05:16.211 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:05:16.211 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:05:16.211 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:05:16.211 EAL: No shared files mode enabled, IPC will be disabled 00:05:16.211 EAL: No shared files mode enabled, IPC is disabled 00:05:16.211 EAL: Selected IOVA mode 'PA' 00:05:16.211 EAL: Probing VFIO support... 00:05:16.211 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:16.211 EAL: VFIO modules not loaded, skipping VFIO support... 00:05:16.211 EAL: Ask a virtual area of 0x2e000 bytes 00:05:16.211 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:16.211 EAL: Setting up physically contiguous memory... 
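EAL lands in IOVA mode 'PA' here because the guest has no VFIO support loaded; the probe is essentially a pair of sysfs lookups, as the "error 2" messages show. A minimal sketch of the same preconditions (the iommu_groups check is an extra sanity probe of mine, not something this log shows EAL doing):

    test -d /sys/module/vfio     || echo 'vfio module not loaded'      # error 2 above
    test -d /sys/module/vfio_pci || echo 'vfio-pci module not loaded'
    ls /sys/kernel/iommu_groups 2>/dev/null | wc -l  # 0 with no IOMMU => PA mode, uio binding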
00:05:16.211 EAL: Setting maximum number of open files to 524288 00:05:16.211 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:16.211 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:16.211 EAL: Ask a virtual area of 0x61000 bytes 00:05:16.211 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:16.211 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:16.211 EAL: Ask a virtual area of 0x400000000 bytes 00:05:16.211 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:16.211 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:16.211 EAL: Ask a virtual area of 0x61000 bytes 00:05:16.211 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:16.211 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:16.211 EAL: Ask a virtual area of 0x400000000 bytes 00:05:16.211 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:16.211 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:16.211 EAL: Ask a virtual area of 0x61000 bytes 00:05:16.211 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:16.211 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:16.211 EAL: Ask a virtual area of 0x400000000 bytes 00:05:16.211 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:16.211 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:16.211 EAL: Ask a virtual area of 0x61000 bytes 00:05:16.211 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:16.211 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:16.211 EAL: Ask a virtual area of 0x400000000 bytes 00:05:16.211 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:16.211 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:16.211 EAL: Hugepages will be freed exactly as allocated. 00:05:16.211 EAL: No shared files mode enabled, IPC is disabled 00:05:16.211 EAL: No shared files mode enabled, IPC is disabled 00:05:16.211 EAL: TSC frequency is ~2600000 KHz 00:05:16.211 EAL: Main lcore 0 is ready (tid=7fb71a40ba40;cpuset=[0]) 00:05:16.211 EAL: Trying to obtain current memory policy. 00:05:16.211 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:16.211 EAL: Restoring previous memory policy: 0 00:05:16.211 EAL: request: mp_malloc_sync 00:05:16.211 EAL: No shared files mode enabled, IPC is disabled 00:05:16.211 EAL: Heap on socket 0 was expanded by 2MB 00:05:16.211 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:16.211 EAL: No shared files mode enabled, IPC is disabled 00:05:16.211 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:16.211 EAL: Mem event callback 'spdk:(nil)' registered 00:05:16.211 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:05:16.211 00:05:16.211 00:05:16.211 CUnit - A unit testing framework for C - Version 2.1-3 00:05:16.211 http://cunit.sourceforge.net/ 00:05:16.211 00:05:16.211 00:05:16.211 Suite: components_suite 00:05:16.473 Test: vtophys_malloc_test ...passed 00:05:16.473 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
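Each of the four memseg lists above reserves 0x400000000 bytes of virtual address space: n_segs 8192 at the 2 MiB (0x200000) hugepage size, i.e. 16 GiB of VA per list before any page is actually backed. The arithmetic checks out:

    printf '0x%x\n' $(( 8192 * 2097152 ))   # -> 0x400000000, matching the reservations above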
00:05:16.473 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:16.473 EAL: Restoring previous memory policy: 4 00:05:16.473 EAL: Calling mem event callback 'spdk:(nil)' 00:05:16.473 EAL: request: mp_malloc_sync 00:05:16.473 EAL: No shared files mode enabled, IPC is disabled 00:05:16.473 EAL: Heap on socket 0 was expanded by 4MB 00:05:16.473 EAL: Calling mem event callback 'spdk:(nil)' 00:05:16.473 EAL: request: mp_malloc_sync 00:05:16.473 EAL: No shared files mode enabled, IPC is disabled 00:05:16.473 EAL: Heap on socket 0 was shrunk by 4MB 00:05:16.473 EAL: Trying to obtain current memory policy. 00:05:16.473 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:16.473 EAL: Restoring previous memory policy: 4 00:05:16.473 EAL: Calling mem event callback 'spdk:(nil)' 00:05:16.473 EAL: request: mp_malloc_sync 00:05:16.473 EAL: No shared files mode enabled, IPC is disabled 00:05:16.473 EAL: Heap on socket 0 was expanded by 6MB 00:05:16.473 EAL: Calling mem event callback 'spdk:(nil)' 00:05:16.473 EAL: request: mp_malloc_sync 00:05:16.473 EAL: No shared files mode enabled, IPC is disabled 00:05:16.473 EAL: Heap on socket 0 was shrunk by 6MB 00:05:16.473 EAL: Trying to obtain current memory policy. 00:05:16.473 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:16.473 EAL: Restoring previous memory policy: 4 00:05:16.473 EAL: Calling mem event callback 'spdk:(nil)' 00:05:16.473 EAL: request: mp_malloc_sync 00:05:16.473 EAL: No shared files mode enabled, IPC is disabled 00:05:16.473 EAL: Heap on socket 0 was expanded by 10MB 00:05:16.473 EAL: Calling mem event callback 'spdk:(nil)' 00:05:16.473 EAL: request: mp_malloc_sync 00:05:16.473 EAL: No shared files mode enabled, IPC is disabled 00:05:16.473 EAL: Heap on socket 0 was shrunk by 10MB 00:05:16.473 EAL: Trying to obtain current memory policy. 00:05:16.473 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:16.473 EAL: Restoring previous memory policy: 4 00:05:16.473 EAL: Calling mem event callback 'spdk:(nil)' 00:05:16.473 EAL: request: mp_malloc_sync 00:05:16.473 EAL: No shared files mode enabled, IPC is disabled 00:05:16.473 EAL: Heap on socket 0 was expanded by 18MB 00:05:16.473 EAL: Calling mem event callback 'spdk:(nil)' 00:05:16.473 EAL: request: mp_malloc_sync 00:05:16.473 EAL: No shared files mode enabled, IPC is disabled 00:05:16.473 EAL: Heap on socket 0 was shrunk by 18MB 00:05:16.473 EAL: Trying to obtain current memory policy. 00:05:16.473 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:16.473 EAL: Restoring previous memory policy: 4 00:05:16.473 EAL: Calling mem event callback 'spdk:(nil)' 00:05:16.473 EAL: request: mp_malloc_sync 00:05:16.473 EAL: No shared files mode enabled, IPC is disabled 00:05:16.473 EAL: Heap on socket 0 was expanded by 34MB 00:05:16.473 EAL: Calling mem event callback 'spdk:(nil)' 00:05:16.473 EAL: request: mp_malloc_sync 00:05:16.473 EAL: No shared files mode enabled, IPC is disabled 00:05:16.473 EAL: Heap on socket 0 was shrunk by 34MB 00:05:16.473 EAL: Trying to obtain current memory policy. 
00:05:16.473 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:16.473 EAL: Restoring previous memory policy: 4 00:05:16.473 EAL: Calling mem event callback 'spdk:(nil)' 00:05:16.473 EAL: request: mp_malloc_sync 00:05:16.473 EAL: No shared files mode enabled, IPC is disabled 00:05:16.473 EAL: Heap on socket 0 was expanded by 66MB 00:05:16.473 EAL: Calling mem event callback 'spdk:(nil)' 00:05:16.473 EAL: request: mp_malloc_sync 00:05:16.473 EAL: No shared files mode enabled, IPC is disabled 00:05:16.473 EAL: Heap on socket 0 was shrunk by 66MB 00:05:16.473 EAL: Trying to obtain current memory policy. 00:05:16.473 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:16.473 EAL: Restoring previous memory policy: 4 00:05:16.473 EAL: Calling mem event callback 'spdk:(nil)' 00:05:16.473 EAL: request: mp_malloc_sync 00:05:16.473 EAL: No shared files mode enabled, IPC is disabled 00:05:16.473 EAL: Heap on socket 0 was expanded by 130MB 00:05:16.473 EAL: Calling mem event callback 'spdk:(nil)' 00:05:16.473 EAL: request: mp_malloc_sync 00:05:16.473 EAL: No shared files mode enabled, IPC is disabled 00:05:16.473 EAL: Heap on socket 0 was shrunk by 130MB 00:05:16.473 EAL: Trying to obtain current memory policy. 00:05:16.473 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:16.734 EAL: Restoring previous memory policy: 4 00:05:16.734 EAL: Calling mem event callback 'spdk:(nil)' 00:05:16.734 EAL: request: mp_malloc_sync 00:05:16.734 EAL: No shared files mode enabled, IPC is disabled 00:05:16.734 EAL: Heap on socket 0 was expanded by 258MB 00:05:16.734 EAL: Calling mem event callback 'spdk:(nil)' 00:05:16.734 EAL: request: mp_malloc_sync 00:05:16.734 EAL: No shared files mode enabled, IPC is disabled 00:05:16.734 EAL: Heap on socket 0 was shrunk by 258MB 00:05:16.734 EAL: Trying to obtain current memory policy. 00:05:16.734 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:16.734 EAL: Restoring previous memory policy: 4 00:05:16.734 EAL: Calling mem event callback 'spdk:(nil)' 00:05:16.734 EAL: request: mp_malloc_sync 00:05:16.734 EAL: No shared files mode enabled, IPC is disabled 00:05:16.734 EAL: Heap on socket 0 was expanded by 514MB 00:05:16.734 EAL: Calling mem event callback 'spdk:(nil)' 00:05:16.734 EAL: request: mp_malloc_sync 00:05:16.734 EAL: No shared files mode enabled, IPC is disabled 00:05:16.734 EAL: Heap on socket 0 was shrunk by 514MB 00:05:16.734 EAL: Trying to obtain current memory policy. 
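The expand/shrink pairs in vtophys_spdk_malloc_test step through sizes of the form 2^n + 2 MB (4, 6, 10, ..., 514 MB above, 1026 MB just below), which reads as a power-of-two request on top of the 2 MB the heap was initially expanded by. The sequence reproduces as:

    for n in $(seq 1 10); do printf '%dMB ' $(( (1 << n) + 2 )); done; echo
    # -> 4MB 6MB 10MB 18MB 34MB 66MB 130MB 258MB 514MB 1026MB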
00:05:16.734 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:16.995 EAL: Restoring previous memory policy: 4 00:05:16.995 EAL: Calling mem event callback 'spdk:(nil)' 00:05:16.995 EAL: request: mp_malloc_sync 00:05:16.995 EAL: No shared files mode enabled, IPC is disabled 00:05:16.995 EAL: Heap on socket 0 was expanded by 1026MB 00:05:16.995 EAL: Calling mem event callback 'spdk:(nil)' 00:05:17.256 passed 00:05:17.256 00:05:17.256 Run Summary: Type Total Ran Passed Failed Inactive 00:05:17.256 suites 1 1 n/a 0 0 00:05:17.256 tests 2 2 2 0 0 00:05:17.256 asserts 5400 5400 5400 0 n/a 00:05:17.256 00:05:17.256 Elapsed time = 0.961 seconds 00:05:17.256 EAL: request: mp_malloc_sync 00:05:17.256 EAL: No shared files mode enabled, IPC is disabled 00:05:17.256 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:17.256 EAL: Calling mem event callback 'spdk:(nil)' 00:05:17.256 EAL: request: mp_malloc_sync 00:05:17.256 EAL: No shared files mode enabled, IPC is disabled 00:05:17.256 EAL: Heap on socket 0 was shrunk by 2MB 00:05:17.256 EAL: No shared files mode enabled, IPC is disabled 00:05:17.256 EAL: No shared files mode enabled, IPC is disabled 00:05:17.256 EAL: No shared files mode enabled, IPC is disabled 00:05:17.256 00:05:17.256 real 0m1.166s 00:05:17.256 user 0m0.465s 00:05:17.256 sys 0m0.573s 00:05:17.256 10:37:17 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:17.256 ************************************ 00:05:17.256 END TEST env_vtophys 00:05:17.256 ************************************ 00:05:17.256 10:37:17 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:17.256 10:37:17 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:17.256 10:37:17 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:17.256 10:37:17 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:17.256 10:37:17 env -- common/autotest_common.sh@10 -- # set +x 00:05:17.256 ************************************ 00:05:17.256 START TEST env_pci 00:05:17.256 ************************************ 00:05:17.256 10:37:17 env.env_pci -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:17.256 00:05:17.256 00:05:17.256 CUnit - A unit testing framework for C - Version 2.1-3 00:05:17.256 http://cunit.sourceforge.net/ 00:05:17.256 00:05:17.256 00:05:17.256 Suite: pci 00:05:17.256 Test: pci_hook ...[2024-12-16 10:37:17.181904] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1049:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 69330 has claimed it 00:05:17.256 passed 00:05:17.256 00:05:17.256 Run Summary: Type Total Ran Passed Failed Inactive 00:05:17.256 suites 1 1 n/a 0 0 00:05:17.256 tests 1 1 1 0 0 00:05:17.256 asserts 25 25 25 0 n/a 00:05:17.256 00:05:17.256 Elapsed time = 0.003 seconds 00:05:17.256 EAL: Cannot find device (10000:00:01.0) 00:05:17.256 EAL: Failed to attach device on primary process 00:05:17.256 00:05:17.256 real 0m0.057s 00:05:17.256 user 0m0.027s 00:05:17.256 sys 0m0.029s 00:05:17.256 10:37:17 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:17.256 10:37:17 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:17.256 ************************************ 00:05:17.256 END TEST env_pci 00:05:17.256 ************************************ 00:05:17.517 10:37:17 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:17.517 10:37:17 env -- env/env.sh@15 -- # uname 00:05:17.517 10:37:17 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:17.517 10:37:17 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:17.517 10:37:17 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:17.517 10:37:17 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:05:17.517 10:37:17 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:17.517 10:37:17 env -- common/autotest_common.sh@10 -- # set +x 00:05:17.517 ************************************ 00:05:17.517 START TEST env_dpdk_post_init 00:05:17.517 ************************************ 00:05:17.517 10:37:17 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:17.517 EAL: Detected CPU lcores: 10 00:05:17.517 EAL: Detected NUMA nodes: 1 00:05:17.517 EAL: Detected shared linkage of DPDK 00:05:17.517 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:17.517 EAL: Selected IOVA mode 'PA' 00:05:17.517 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:17.517 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:05:17.518 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:05:17.518 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:05:17.518 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:05:17.518 Starting DPDK initialization... 00:05:17.518 Starting SPDK post initialization... 00:05:17.518 SPDK NVMe probe 00:05:17.518 Attaching to 0000:00:10.0 00:05:17.518 Attaching to 0000:00:11.0 00:05:17.518 Attaching to 0000:00:12.0 00:05:17.518 Attaching to 0000:00:13.0 00:05:17.518 Attached to 0000:00:13.0 00:05:17.518 Attached to 0000:00:10.0 00:05:17.518 Attached to 0000:00:11.0 00:05:17.518 Attached to 0000:00:12.0 00:05:17.518 Cleaning up... 
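Note the "Attached to" lines come back in a different order (13.0 first) than the probe requests were issued; the attach callbacks fire as each controller finishes initialization, so the order need not follow BDF order. The binary is driven like the other env tests; its invocation, as logged above:

    # core mask 0x1 = single core; a fixed --base-virtaddr keeps DPDK's mappings at a
    # predictable address (needed to match across processes in multi-process setups)
    test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000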
00:05:17.518 ************************************ 00:05:17.518 END TEST env_dpdk_post_init 00:05:17.518 ************************************ 00:05:17.518 00:05:17.518 real 0m0.201s 00:05:17.518 user 0m0.053s 00:05:17.518 sys 0m0.049s 00:05:17.518 10:37:17 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:17.518 10:37:17 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:17.779 10:37:17 env -- env/env.sh@26 -- # uname 00:05:17.779 10:37:17 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:17.779 10:37:17 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:17.779 10:37:17 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:17.779 10:37:17 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:17.779 10:37:17 env -- common/autotest_common.sh@10 -- # set +x 00:05:17.779 ************************************ 00:05:17.779 START TEST env_mem_callbacks 00:05:17.779 ************************************ 00:05:17.779 10:37:17 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:17.779 EAL: Detected CPU lcores: 10 00:05:17.779 EAL: Detected NUMA nodes: 1 00:05:17.779 EAL: Detected shared linkage of DPDK 00:05:17.779 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:17.779 EAL: Selected IOVA mode 'PA' 00:05:17.779 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:17.779 00:05:17.779 00:05:17.779 CUnit - A unit testing framework for C - Version 2.1-3 00:05:17.779 http://cunit.sourceforge.net/ 00:05:17.779 00:05:17.779 00:05:17.779 Suite: memory 00:05:17.779 Test: test ... 00:05:17.779 register 0x200000200000 2097152 00:05:17.779 malloc 3145728 00:05:17.779 register 0x200000400000 4194304 00:05:17.779 buf 0x200000500000 len 3145728 PASSED 00:05:17.779 malloc 64 00:05:17.779 buf 0x2000004fff40 len 64 PASSED 00:05:17.779 malloc 4194304 00:05:17.779 register 0x200000800000 6291456 00:05:17.779 buf 0x200000a00000 len 4194304 PASSED 00:05:17.779 free 0x200000500000 3145728 00:05:17.779 free 0x2000004fff40 64 00:05:17.779 unregister 0x200000400000 4194304 PASSED 00:05:17.779 free 0x200000a00000 4194304 00:05:17.779 unregister 0x200000800000 6291456 PASSED 00:05:17.779 malloc 8388608 00:05:17.779 register 0x200000400000 10485760 00:05:17.779 buf 0x200000600000 len 8388608 PASSED 00:05:17.779 free 0x200000600000 8388608 00:05:17.779 unregister 0x200000400000 10485760 PASSED 00:05:17.779 passed 00:05:17.779 00:05:17.779 Run Summary: Type Total Ran Passed Failed Inactive 00:05:17.779 suites 1 1 n/a 0 0 00:05:17.779 tests 1 1 1 0 0 00:05:17.779 asserts 15 15 15 0 n/a 00:05:17.779 00:05:17.779 Elapsed time = 0.006 seconds 00:05:17.779 00:05:17.779 real 0m0.149s 00:05:17.779 user 0m0.023s 00:05:17.779 sys 0m0.023s 00:05:17.779 10:37:17 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:17.779 ************************************ 00:05:17.779 END TEST env_mem_callbacks 00:05:17.779 ************************************ 00:05:17.779 10:37:17 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:17.779 ************************************ 00:05:17.779 END TEST env 00:05:17.779 ************************************ 00:05:17.779 00:05:17.779 real 0m2.298s 00:05:17.779 user 0m0.950s 00:05:17.779 sys 0m0.906s 00:05:17.779 10:37:17 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:17.779 10:37:17 env -- 
common/autotest_common.sh@10 -- # set +x 00:05:18.041 10:37:17 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:18.041 10:37:17 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:18.041 10:37:17 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:18.041 10:37:17 -- common/autotest_common.sh@10 -- # set +x 00:05:18.041 ************************************ 00:05:18.041 START TEST rpc 00:05:18.041 ************************************ 00:05:18.041 10:37:17 rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:18.041 * Looking for test storage... 00:05:18.041 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:18.041 10:37:17 rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:18.041 10:37:17 rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:05:18.041 10:37:17 rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:18.041 10:37:17 rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:18.041 10:37:17 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:18.041 10:37:17 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:18.041 10:37:17 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:18.041 10:37:17 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:18.041 10:37:17 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:18.041 10:37:17 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:18.041 10:37:17 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:18.041 10:37:17 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:18.041 10:37:17 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:18.041 10:37:17 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:18.041 10:37:17 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:18.041 10:37:17 rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:18.041 10:37:17 rpc -- scripts/common.sh@345 -- # : 1 00:05:18.041 10:37:17 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:18.041 10:37:17 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:18.041 10:37:17 rpc -- scripts/common.sh@365 -- # decimal 1 00:05:18.041 10:37:17 rpc -- scripts/common.sh@353 -- # local d=1 00:05:18.041 10:37:17 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:18.041 10:37:17 rpc -- scripts/common.sh@355 -- # echo 1 00:05:18.041 10:37:17 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:18.041 10:37:17 rpc -- scripts/common.sh@366 -- # decimal 2 00:05:18.041 10:37:17 rpc -- scripts/common.sh@353 -- # local d=2 00:05:18.041 10:37:17 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:18.041 10:37:17 rpc -- scripts/common.sh@355 -- # echo 2 00:05:18.041 10:37:17 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:18.041 10:37:17 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:18.041 10:37:17 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:18.041 10:37:17 rpc -- scripts/common.sh@368 -- # return 0 00:05:18.041 10:37:17 rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:18.041 10:37:17 rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:18.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.041 --rc genhtml_branch_coverage=1 00:05:18.041 --rc genhtml_function_coverage=1 00:05:18.041 --rc genhtml_legend=1 00:05:18.041 --rc geninfo_all_blocks=1 00:05:18.041 --rc geninfo_unexecuted_blocks=1 00:05:18.041 00:05:18.041 ' 00:05:18.041 10:37:17 rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:18.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.041 --rc genhtml_branch_coverage=1 00:05:18.041 --rc genhtml_function_coverage=1 00:05:18.041 --rc genhtml_legend=1 00:05:18.041 --rc geninfo_all_blocks=1 00:05:18.041 --rc geninfo_unexecuted_blocks=1 00:05:18.041 00:05:18.041 ' 00:05:18.041 10:37:17 rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:18.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.041 --rc genhtml_branch_coverage=1 00:05:18.041 --rc genhtml_function_coverage=1 00:05:18.041 --rc genhtml_legend=1 00:05:18.041 --rc geninfo_all_blocks=1 00:05:18.041 --rc geninfo_unexecuted_blocks=1 00:05:18.041 00:05:18.041 ' 00:05:18.041 10:37:17 rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:18.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.041 --rc genhtml_branch_coverage=1 00:05:18.041 --rc genhtml_function_coverage=1 00:05:18.041 --rc genhtml_legend=1 00:05:18.041 --rc geninfo_all_blocks=1 00:05:18.041 --rc geninfo_unexecuted_blocks=1 00:05:18.041 00:05:18.041 ' 00:05:18.041 10:37:17 rpc -- rpc/rpc.sh@65 -- # spdk_pid=69457 00:05:18.041 10:37:17 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:18.041 10:37:17 rpc -- rpc/rpc.sh@67 -- # waitforlisten 69457 00:05:18.041 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:18.041 10:37:17 rpc -- common/autotest_common.sh@831 -- # '[' -z 69457 ']' 00:05:18.041 10:37:17 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:18.041 10:37:17 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:05:18.041 10:37:17 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:18.041 10:37:17 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
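The rpc suite needs a live target: rpc.sh launches spdk_tgt with the bdev tracepoint group enabled (-e bdev, hence the spdk_trace hint below) and waits for the JSON-RPC socket before running the sub-tests. A minimal hand-run equivalent; rpc_get_methods is just a cheap liveness RPC, any method would do:

    build/bin/spdk_tgt -e bdev &
    until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do sleep 0.1; done
    scripts/rpc.py bdev_get_bdevs   # '[]' on a fresh target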
00:05:18.041 10:37:17 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:18.041 10:37:17 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:18.041 [2024-12-16 10:37:18.006942] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:05:18.042 [2024-12-16 10:37:18.007063] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69457 ] 00:05:18.303 [2024-12-16 10:37:18.136618] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:18.303 [2024-12-16 10:37:18.168616] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:18.303 [2024-12-16 10:37:18.168664] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 69457' to capture a snapshot of events at runtime. 00:05:18.303 [2024-12-16 10:37:18.168676] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:18.303 [2024-12-16 10:37:18.168683] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:18.303 [2024-12-16 10:37:18.168696] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid69457 for offline analysis/debug. 00:05:18.303 [2024-12-16 10:37:18.168730] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.874 10:37:18 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:18.874 10:37:18 rpc -- common/autotest_common.sh@864 -- # return 0 00:05:18.874 10:37:18 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:18.874 10:37:18 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:18.874 10:37:18 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:18.874 10:37:18 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:18.874 10:37:18 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:18.874 10:37:18 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:18.874 10:37:18 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:18.874 ************************************ 00:05:18.874 START TEST rpc_integrity 00:05:18.874 ************************************ 00:05:18.874 10:37:18 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:05:18.874 10:37:18 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:18.874 10:37:18 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:18.874 10:37:18 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:19.135 10:37:18 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:19.135 10:37:18 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:19.135 10:37:18 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:19.135 10:37:18 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:19.135 10:37:18 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:19.135 10:37:18 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:19.135 
10:37:18 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:19.135 10:37:18 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:19.135 10:37:18 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:19.135 10:37:18 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:19.135 10:37:18 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:19.135 10:37:18 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:19.135 10:37:18 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:19.135 10:37:18 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:19.135 { 00:05:19.135 "name": "Malloc0", 00:05:19.135 "aliases": [ 00:05:19.135 "1ab31f6f-1f1f-4dba-a317-6a6af9d1af34" 00:05:19.135 ], 00:05:19.135 "product_name": "Malloc disk", 00:05:19.135 "block_size": 512, 00:05:19.135 "num_blocks": 16384, 00:05:19.135 "uuid": "1ab31f6f-1f1f-4dba-a317-6a6af9d1af34", 00:05:19.135 "assigned_rate_limits": { 00:05:19.135 "rw_ios_per_sec": 0, 00:05:19.135 "rw_mbytes_per_sec": 0, 00:05:19.135 "r_mbytes_per_sec": 0, 00:05:19.135 "w_mbytes_per_sec": 0 00:05:19.135 }, 00:05:19.135 "claimed": false, 00:05:19.135 "zoned": false, 00:05:19.135 "supported_io_types": { 00:05:19.135 "read": true, 00:05:19.135 "write": true, 00:05:19.135 "unmap": true, 00:05:19.135 "flush": true, 00:05:19.135 "reset": true, 00:05:19.135 "nvme_admin": false, 00:05:19.135 "nvme_io": false, 00:05:19.135 "nvme_io_md": false, 00:05:19.135 "write_zeroes": true, 00:05:19.135 "zcopy": true, 00:05:19.135 "get_zone_info": false, 00:05:19.135 "zone_management": false, 00:05:19.135 "zone_append": false, 00:05:19.135 "compare": false, 00:05:19.135 "compare_and_write": false, 00:05:19.135 "abort": true, 00:05:19.135 "seek_hole": false, 00:05:19.135 "seek_data": false, 00:05:19.135 "copy": true, 00:05:19.135 "nvme_iov_md": false 00:05:19.135 }, 00:05:19.135 "memory_domains": [ 00:05:19.135 { 00:05:19.135 "dma_device_id": "system", 00:05:19.135 "dma_device_type": 1 00:05:19.135 }, 00:05:19.135 { 00:05:19.135 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:19.135 "dma_device_type": 2 00:05:19.135 } 00:05:19.135 ], 00:05:19.135 "driver_specific": {} 00:05:19.135 } 00:05:19.135 ]' 00:05:19.135 10:37:18 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:19.135 10:37:18 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:19.135 10:37:18 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:19.135 10:37:18 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:19.135 10:37:18 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:19.135 [2024-12-16 10:37:18.960619] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:19.135 [2024-12-16 10:37:18.960680] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:19.135 [2024-12-16 10:37:18.960705] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:05:19.135 [2024-12-16 10:37:18.960718] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:19.135 [2024-12-16 10:37:18.962962] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:19.135 [2024-12-16 10:37:18.962996] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:19.135 Passthru0 00:05:19.135 10:37:18 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
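The rpc_integrity flow running here round-trips the bdev RPCs end to end: list (expect 0 bdevs), create an 8 MB / 512-byte-block malloc disk, list again, stack a passthru on it, verify both show up, then delete in reverse order. The same sequence by hand, with the names from this trace:

    scripts/rpc.py bdev_get_bdevs | jq length            # 0 before
    scripts/rpc.py bdev_malloc_create 8 512              # -> Malloc0 (16384 x 512 B blocks)
    scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0
    scripts/rpc.py bdev_get_bdevs | jq length            # 2 once Passthru0 claims Malloc0
    scripts/rpc.py bdev_passthru_delete Passthru0
    scripts/rpc.py bdev_malloc_delete Malloc0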
00:05:19.135 10:37:18 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:19.135 10:37:18 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:19.135 10:37:18 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:19.135 10:37:18 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:19.135 10:37:18 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:19.135 { 00:05:19.135 "name": "Malloc0", 00:05:19.135 "aliases": [ 00:05:19.135 "1ab31f6f-1f1f-4dba-a317-6a6af9d1af34" 00:05:19.135 ], 00:05:19.135 "product_name": "Malloc disk", 00:05:19.135 "block_size": 512, 00:05:19.135 "num_blocks": 16384, 00:05:19.135 "uuid": "1ab31f6f-1f1f-4dba-a317-6a6af9d1af34", 00:05:19.135 "assigned_rate_limits": { 00:05:19.135 "rw_ios_per_sec": 0, 00:05:19.135 "rw_mbytes_per_sec": 0, 00:05:19.135 "r_mbytes_per_sec": 0, 00:05:19.135 "w_mbytes_per_sec": 0 00:05:19.135 }, 00:05:19.135 "claimed": true, 00:05:19.135 "claim_type": "exclusive_write", 00:05:19.135 "zoned": false, 00:05:19.135 "supported_io_types": { 00:05:19.135 "read": true, 00:05:19.135 "write": true, 00:05:19.135 "unmap": true, 00:05:19.135 "flush": true, 00:05:19.135 "reset": true, 00:05:19.135 "nvme_admin": false, 00:05:19.135 "nvme_io": false, 00:05:19.135 "nvme_io_md": false, 00:05:19.135 "write_zeroes": true, 00:05:19.135 "zcopy": true, 00:05:19.135 "get_zone_info": false, 00:05:19.135 "zone_management": false, 00:05:19.135 "zone_append": false, 00:05:19.135 "compare": false, 00:05:19.135 "compare_and_write": false, 00:05:19.135 "abort": true, 00:05:19.135 "seek_hole": false, 00:05:19.135 "seek_data": false, 00:05:19.135 "copy": true, 00:05:19.135 "nvme_iov_md": false 00:05:19.135 }, 00:05:19.135 "memory_domains": [ 00:05:19.135 { 00:05:19.135 "dma_device_id": "system", 00:05:19.135 "dma_device_type": 1 00:05:19.135 }, 00:05:19.135 { 00:05:19.135 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:19.135 "dma_device_type": 2 00:05:19.135 } 00:05:19.135 ], 00:05:19.135 "driver_specific": {} 00:05:19.135 }, 00:05:19.135 { 00:05:19.135 "name": "Passthru0", 00:05:19.135 "aliases": [ 00:05:19.135 "34de7cd9-67ac-59a5-89eb-88cf4a35855f" 00:05:19.135 ], 00:05:19.136 "product_name": "passthru", 00:05:19.136 "block_size": 512, 00:05:19.136 "num_blocks": 16384, 00:05:19.136 "uuid": "34de7cd9-67ac-59a5-89eb-88cf4a35855f", 00:05:19.136 "assigned_rate_limits": { 00:05:19.136 "rw_ios_per_sec": 0, 00:05:19.136 "rw_mbytes_per_sec": 0, 00:05:19.136 "r_mbytes_per_sec": 0, 00:05:19.136 "w_mbytes_per_sec": 0 00:05:19.136 }, 00:05:19.136 "claimed": false, 00:05:19.136 "zoned": false, 00:05:19.136 "supported_io_types": { 00:05:19.136 "read": true, 00:05:19.136 "write": true, 00:05:19.136 "unmap": true, 00:05:19.136 "flush": true, 00:05:19.136 "reset": true, 00:05:19.136 "nvme_admin": false, 00:05:19.136 "nvme_io": false, 00:05:19.136 "nvme_io_md": false, 00:05:19.136 "write_zeroes": true, 00:05:19.136 "zcopy": true, 00:05:19.136 "get_zone_info": false, 00:05:19.136 "zone_management": false, 00:05:19.136 "zone_append": false, 00:05:19.136 "compare": false, 00:05:19.136 "compare_and_write": false, 00:05:19.136 "abort": true, 00:05:19.136 "seek_hole": false, 00:05:19.136 "seek_data": false, 00:05:19.136 "copy": true, 00:05:19.136 "nvme_iov_md": false 00:05:19.136 }, 00:05:19.136 "memory_domains": [ 00:05:19.136 { 00:05:19.136 "dma_device_id": "system", 00:05:19.136 "dma_device_type": 1 00:05:19.136 }, 00:05:19.136 { 00:05:19.136 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:19.136 "dma_device_type": 
2 00:05:19.136 } 00:05:19.136 ], 00:05:19.136 "driver_specific": { 00:05:19.136 "passthru": { 00:05:19.136 "name": "Passthru0", 00:05:19.136 "base_bdev_name": "Malloc0" 00:05:19.136 } 00:05:19.136 } 00:05:19.136 } 00:05:19.136 ]' 00:05:19.136 10:37:18 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:19.136 10:37:19 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:19.136 10:37:19 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:19.136 10:37:19 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:19.136 10:37:19 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:19.136 10:37:19 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:19.136 10:37:19 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:19.136 10:37:19 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:19.136 10:37:19 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:19.136 10:37:19 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:19.136 10:37:19 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:19.136 10:37:19 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:19.136 10:37:19 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:19.136 10:37:19 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:19.136 10:37:19 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:19.136 10:37:19 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:19.136 10:37:19 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:19.136 00:05:19.136 real 0m0.221s 00:05:19.136 user 0m0.123s 00:05:19.136 sys 0m0.036s 00:05:19.136 10:37:19 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:19.136 ************************************ 00:05:19.136 END TEST rpc_integrity 00:05:19.136 ************************************ 00:05:19.136 10:37:19 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:19.136 10:37:19 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:19.136 10:37:19 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:19.136 10:37:19 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:19.136 10:37:19 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:19.394 ************************************ 00:05:19.394 START TEST rpc_plugins 00:05:19.394 ************************************ 00:05:19.394 10:37:19 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:05:19.394 10:37:19 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:19.394 10:37:19 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:19.394 10:37:19 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:19.394 10:37:19 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:19.394 10:37:19 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:19.394 10:37:19 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:19.394 10:37:19 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:19.394 10:37:19 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:19.394 10:37:19 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:19.394 10:37:19 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:19.394 { 00:05:19.394 "name": "Malloc1", 00:05:19.394 
"aliases": [ 00:05:19.394 "c1e7f275-da08-41f1-962b-64f0cd502271" 00:05:19.394 ], 00:05:19.394 "product_name": "Malloc disk", 00:05:19.394 "block_size": 4096, 00:05:19.394 "num_blocks": 256, 00:05:19.394 "uuid": "c1e7f275-da08-41f1-962b-64f0cd502271", 00:05:19.394 "assigned_rate_limits": { 00:05:19.394 "rw_ios_per_sec": 0, 00:05:19.394 "rw_mbytes_per_sec": 0, 00:05:19.394 "r_mbytes_per_sec": 0, 00:05:19.394 "w_mbytes_per_sec": 0 00:05:19.394 }, 00:05:19.394 "claimed": false, 00:05:19.394 "zoned": false, 00:05:19.394 "supported_io_types": { 00:05:19.394 "read": true, 00:05:19.394 "write": true, 00:05:19.394 "unmap": true, 00:05:19.394 "flush": true, 00:05:19.394 "reset": true, 00:05:19.394 "nvme_admin": false, 00:05:19.394 "nvme_io": false, 00:05:19.394 "nvme_io_md": false, 00:05:19.394 "write_zeroes": true, 00:05:19.394 "zcopy": true, 00:05:19.394 "get_zone_info": false, 00:05:19.394 "zone_management": false, 00:05:19.394 "zone_append": false, 00:05:19.394 "compare": false, 00:05:19.394 "compare_and_write": false, 00:05:19.394 "abort": true, 00:05:19.394 "seek_hole": false, 00:05:19.394 "seek_data": false, 00:05:19.394 "copy": true, 00:05:19.394 "nvme_iov_md": false 00:05:19.394 }, 00:05:19.394 "memory_domains": [ 00:05:19.394 { 00:05:19.394 "dma_device_id": "system", 00:05:19.394 "dma_device_type": 1 00:05:19.394 }, 00:05:19.394 { 00:05:19.394 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:19.394 "dma_device_type": 2 00:05:19.394 } 00:05:19.394 ], 00:05:19.394 "driver_specific": {} 00:05:19.394 } 00:05:19.394 ]' 00:05:19.394 10:37:19 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:19.394 10:37:19 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:19.394 10:37:19 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:19.394 10:37:19 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:19.394 10:37:19 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:19.394 10:37:19 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:19.394 10:37:19 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:19.394 10:37:19 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:19.394 10:37:19 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:19.394 10:37:19 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:19.394 10:37:19 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:19.394 10:37:19 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:19.394 10:37:19 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:19.394 00:05:19.394 real 0m0.116s 00:05:19.394 user 0m0.067s 00:05:19.394 sys 0m0.013s 00:05:19.394 10:37:19 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:19.394 ************************************ 00:05:19.394 END TEST rpc_plugins 00:05:19.394 ************************************ 00:05:19.394 10:37:19 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:19.394 10:37:19 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:19.394 10:37:19 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:19.394 10:37:19 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:19.394 10:37:19 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:19.394 ************************************ 00:05:19.394 START TEST rpc_trace_cmd_test 00:05:19.394 ************************************ 00:05:19.394 10:37:19 rpc.rpc_trace_cmd_test -- 
common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:05:19.394 10:37:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:19.394 10:37:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:19.394 10:37:19 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:19.394 10:37:19 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:19.394 10:37:19 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:19.394 10:37:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:19.394 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid69457", 00:05:19.394 "tpoint_group_mask": "0x8", 00:05:19.394 "iscsi_conn": { 00:05:19.394 "mask": "0x2", 00:05:19.394 "tpoint_mask": "0x0" 00:05:19.394 }, 00:05:19.394 "scsi": { 00:05:19.394 "mask": "0x4", 00:05:19.394 "tpoint_mask": "0x0" 00:05:19.394 }, 00:05:19.394 "bdev": { 00:05:19.394 "mask": "0x8", 00:05:19.394 "tpoint_mask": "0xffffffffffffffff" 00:05:19.394 }, 00:05:19.394 "nvmf_rdma": { 00:05:19.394 "mask": "0x10", 00:05:19.394 "tpoint_mask": "0x0" 00:05:19.394 }, 00:05:19.394 "nvmf_tcp": { 00:05:19.394 "mask": "0x20", 00:05:19.394 "tpoint_mask": "0x0" 00:05:19.394 }, 00:05:19.394 "ftl": { 00:05:19.394 "mask": "0x40", 00:05:19.394 "tpoint_mask": "0x0" 00:05:19.394 }, 00:05:19.394 "blobfs": { 00:05:19.394 "mask": "0x80", 00:05:19.394 "tpoint_mask": "0x0" 00:05:19.394 }, 00:05:19.394 "dsa": { 00:05:19.394 "mask": "0x200", 00:05:19.394 "tpoint_mask": "0x0" 00:05:19.395 }, 00:05:19.395 "thread": { 00:05:19.395 "mask": "0x400", 00:05:19.395 "tpoint_mask": "0x0" 00:05:19.395 }, 00:05:19.395 "nvme_pcie": { 00:05:19.395 "mask": "0x800", 00:05:19.395 "tpoint_mask": "0x0" 00:05:19.395 }, 00:05:19.395 "iaa": { 00:05:19.395 "mask": "0x1000", 00:05:19.395 "tpoint_mask": "0x0" 00:05:19.395 }, 00:05:19.395 "nvme_tcp": { 00:05:19.395 "mask": "0x2000", 00:05:19.395 "tpoint_mask": "0x0" 00:05:19.395 }, 00:05:19.395 "bdev_nvme": { 00:05:19.395 "mask": "0x4000", 00:05:19.395 "tpoint_mask": "0x0" 00:05:19.395 }, 00:05:19.395 "sock": { 00:05:19.395 "mask": "0x8000", 00:05:19.395 "tpoint_mask": "0x0" 00:05:19.395 }, 00:05:19.395 "blob": { 00:05:19.395 "mask": "0x10000", 00:05:19.395 "tpoint_mask": "0x0" 00:05:19.395 }, 00:05:19.395 "bdev_raid": { 00:05:19.395 "mask": "0x20000", 00:05:19.395 "tpoint_mask": "0x0" 00:05:19.395 } 00:05:19.395 }' 00:05:19.395 10:37:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:19.395 10:37:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 18 -gt 2 ']' 00:05:19.395 10:37:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:19.654 10:37:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:19.654 10:37:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:19.654 10:37:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:19.654 10:37:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:19.654 10:37:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:19.654 10:37:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:19.654 10:37:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:19.654 00:05:19.654 real 0m0.173s 00:05:19.654 user 0m0.136s 00:05:19.654 sys 0m0.024s 00:05:19.654 ************************************ 00:05:19.654 END TEST rpc_trace_cmd_test 00:05:19.654 ************************************ 00:05:19.654 10:37:19 
rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:19.654 10:37:19 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:19.654 10:37:19 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:19.654 10:37:19 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:19.654 10:37:19 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:19.654 10:37:19 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:19.654 10:37:19 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:19.654 10:37:19 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:19.654 ************************************ 00:05:19.654 START TEST rpc_daemon_integrity 00:05:19.654 ************************************ 00:05:19.654 10:37:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:05:19.654 10:37:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:19.654 10:37:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:19.654 10:37:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:19.654 10:37:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:19.654 10:37:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:19.654 10:37:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:19.654 10:37:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:19.654 10:37:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:19.654 10:37:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:19.654 10:37:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:19.654 10:37:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:19.654 10:37:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:19.654 10:37:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:19.654 10:37:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:19.654 10:37:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:19.654 10:37:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:19.654 10:37:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:19.654 { 00:05:19.654 "name": "Malloc2", 00:05:19.654 "aliases": [ 00:05:19.654 "46587c2d-a2eb-42a9-a0e3-a1804dfa0e8b" 00:05:19.654 ], 00:05:19.654 "product_name": "Malloc disk", 00:05:19.654 "block_size": 512, 00:05:19.654 "num_blocks": 16384, 00:05:19.654 "uuid": "46587c2d-a2eb-42a9-a0e3-a1804dfa0e8b", 00:05:19.654 "assigned_rate_limits": { 00:05:19.654 "rw_ios_per_sec": 0, 00:05:19.654 "rw_mbytes_per_sec": 0, 00:05:19.654 "r_mbytes_per_sec": 0, 00:05:19.654 "w_mbytes_per_sec": 0 00:05:19.654 }, 00:05:19.654 "claimed": false, 00:05:19.654 "zoned": false, 00:05:19.654 "supported_io_types": { 00:05:19.654 "read": true, 00:05:19.654 "write": true, 00:05:19.654 "unmap": true, 00:05:19.654 "flush": true, 00:05:19.654 "reset": true, 00:05:19.654 "nvme_admin": false, 00:05:19.654 "nvme_io": false, 00:05:19.654 "nvme_io_md": false, 00:05:19.654 "write_zeroes": true, 00:05:19.654 "zcopy": true, 00:05:19.654 "get_zone_info": false, 00:05:19.654 "zone_management": false, 00:05:19.654 "zone_append": false, 00:05:19.654 "compare": false, 00:05:19.654 "compare_and_write": false, 00:05:19.654 "abort": true, 00:05:19.654 
"seek_hole": false, 00:05:19.654 "seek_data": false, 00:05:19.654 "copy": true, 00:05:19.654 "nvme_iov_md": false 00:05:19.654 }, 00:05:19.654 "memory_domains": [ 00:05:19.654 { 00:05:19.654 "dma_device_id": "system", 00:05:19.654 "dma_device_type": 1 00:05:19.654 }, 00:05:19.654 { 00:05:19.654 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:19.654 "dma_device_type": 2 00:05:19.654 } 00:05:19.654 ], 00:05:19.654 "driver_specific": {} 00:05:19.654 } 00:05:19.654 ]' 00:05:19.654 10:37:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:19.654 10:37:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:19.654 10:37:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:19.654 10:37:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:19.654 10:37:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:19.654 [2024-12-16 10:37:19.640947] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:19.654 [2024-12-16 10:37:19.640993] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:19.654 [2024-12-16 10:37:19.641012] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:05:19.654 [2024-12-16 10:37:19.641021] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:19.915 [2024-12-16 10:37:19.643156] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:19.915 [2024-12-16 10:37:19.643189] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:19.915 Passthru0 00:05:19.915 10:37:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:19.915 10:37:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:19.915 10:37:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:19.915 10:37:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:19.915 10:37:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:19.915 10:37:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:19.915 { 00:05:19.915 "name": "Malloc2", 00:05:19.915 "aliases": [ 00:05:19.915 "46587c2d-a2eb-42a9-a0e3-a1804dfa0e8b" 00:05:19.915 ], 00:05:19.915 "product_name": "Malloc disk", 00:05:19.915 "block_size": 512, 00:05:19.915 "num_blocks": 16384, 00:05:19.915 "uuid": "46587c2d-a2eb-42a9-a0e3-a1804dfa0e8b", 00:05:19.915 "assigned_rate_limits": { 00:05:19.915 "rw_ios_per_sec": 0, 00:05:19.915 "rw_mbytes_per_sec": 0, 00:05:19.915 "r_mbytes_per_sec": 0, 00:05:19.915 "w_mbytes_per_sec": 0 00:05:19.915 }, 00:05:19.915 "claimed": true, 00:05:19.915 "claim_type": "exclusive_write", 00:05:19.915 "zoned": false, 00:05:19.915 "supported_io_types": { 00:05:19.915 "read": true, 00:05:19.915 "write": true, 00:05:19.915 "unmap": true, 00:05:19.915 "flush": true, 00:05:19.915 "reset": true, 00:05:19.915 "nvme_admin": false, 00:05:19.915 "nvme_io": false, 00:05:19.915 "nvme_io_md": false, 00:05:19.915 "write_zeroes": true, 00:05:19.915 "zcopy": true, 00:05:19.915 "get_zone_info": false, 00:05:19.915 "zone_management": false, 00:05:19.915 "zone_append": false, 00:05:19.915 "compare": false, 00:05:19.915 "compare_and_write": false, 00:05:19.915 "abort": true, 00:05:19.915 "seek_hole": false, 00:05:19.915 "seek_data": false, 00:05:19.915 "copy": true, 00:05:19.915 "nvme_iov_md": false 
00:05:19.915 }, 00:05:19.915 "memory_domains": [ 00:05:19.915 { 00:05:19.915 "dma_device_id": "system", 00:05:19.915 "dma_device_type": 1 00:05:19.915 }, 00:05:19.915 { 00:05:19.915 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:19.915 "dma_device_type": 2 00:05:19.915 } 00:05:19.915 ], 00:05:19.915 "driver_specific": {} 00:05:19.915 }, 00:05:19.915 { 00:05:19.915 "name": "Passthru0", 00:05:19.915 "aliases": [ 00:05:19.915 "08ad0c52-5efb-5fda-bec2-9e457fb58315" 00:05:19.915 ], 00:05:19.915 "product_name": "passthru", 00:05:19.915 "block_size": 512, 00:05:19.915 "num_blocks": 16384, 00:05:19.915 "uuid": "08ad0c52-5efb-5fda-bec2-9e457fb58315", 00:05:19.915 "assigned_rate_limits": { 00:05:19.915 "rw_ios_per_sec": 0, 00:05:19.915 "rw_mbytes_per_sec": 0, 00:05:19.915 "r_mbytes_per_sec": 0, 00:05:19.915 "w_mbytes_per_sec": 0 00:05:19.915 }, 00:05:19.915 "claimed": false, 00:05:19.915 "zoned": false, 00:05:19.915 "supported_io_types": { 00:05:19.915 "read": true, 00:05:19.915 "write": true, 00:05:19.915 "unmap": true, 00:05:19.915 "flush": true, 00:05:19.915 "reset": true, 00:05:19.915 "nvme_admin": false, 00:05:19.915 "nvme_io": false, 00:05:19.915 "nvme_io_md": false, 00:05:19.915 "write_zeroes": true, 00:05:19.915 "zcopy": true, 00:05:19.915 "get_zone_info": false, 00:05:19.915 "zone_management": false, 00:05:19.915 "zone_append": false, 00:05:19.915 "compare": false, 00:05:19.915 "compare_and_write": false, 00:05:19.915 "abort": true, 00:05:19.915 "seek_hole": false, 00:05:19.915 "seek_data": false, 00:05:19.915 "copy": true, 00:05:19.915 "nvme_iov_md": false 00:05:19.915 }, 00:05:19.915 "memory_domains": [ 00:05:19.915 { 00:05:19.915 "dma_device_id": "system", 00:05:19.915 "dma_device_type": 1 00:05:19.915 }, 00:05:19.915 { 00:05:19.915 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:19.915 "dma_device_type": 2 00:05:19.915 } 00:05:19.915 ], 00:05:19.915 "driver_specific": { 00:05:19.915 "passthru": { 00:05:19.915 "name": "Passthru0", 00:05:19.915 "base_bdev_name": "Malloc2" 00:05:19.915 } 00:05:19.915 } 00:05:19.915 } 00:05:19.915 ]' 00:05:19.915 10:37:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:19.915 10:37:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:19.915 10:37:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:19.915 10:37:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:19.915 10:37:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:19.915 10:37:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:19.915 10:37:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:19.915 10:37:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:19.915 10:37:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:19.915 10:37:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:19.915 10:37:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:19.915 10:37:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:19.915 10:37:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:19.915 10:37:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:19.915 10:37:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:19.915 10:37:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # 
jq length 00:05:19.915 10:37:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:19.915 00:05:19.915 real 0m0.225s 00:05:19.915 user 0m0.125s 00:05:19.915 sys 0m0.034s 00:05:19.916 ************************************ 00:05:19.916 10:37:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:19.916 10:37:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:19.916 END TEST rpc_daemon_integrity 00:05:19.916 ************************************ 00:05:19.916 10:37:19 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:19.916 10:37:19 rpc -- rpc/rpc.sh@84 -- # killprocess 69457 00:05:19.916 10:37:19 rpc -- common/autotest_common.sh@950 -- # '[' -z 69457 ']' 00:05:19.916 10:37:19 rpc -- common/autotest_common.sh@954 -- # kill -0 69457 00:05:19.916 10:37:19 rpc -- common/autotest_common.sh@955 -- # uname 00:05:19.916 10:37:19 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:19.916 10:37:19 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69457 00:05:19.916 10:37:19 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:19.916 killing process with pid 69457 00:05:19.916 10:37:19 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:19.916 10:37:19 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69457' 00:05:19.916 10:37:19 rpc -- common/autotest_common.sh@969 -- # kill 69457 00:05:19.916 10:37:19 rpc -- common/autotest_common.sh@974 -- # wait 69457 00:05:20.177 ************************************ 00:05:20.177 END TEST rpc 00:05:20.177 ************************************ 00:05:20.177 00:05:20.177 real 0m2.291s 00:05:20.177 user 0m2.726s 00:05:20.177 sys 0m0.563s 00:05:20.177 10:37:20 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:20.177 10:37:20 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:20.177 10:37:20 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:20.177 10:37:20 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:20.177 10:37:20 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:20.177 10:37:20 -- common/autotest_common.sh@10 -- # set +x 00:05:20.177 ************************************ 00:05:20.177 START TEST skip_rpc 00:05:20.177 ************************************ 00:05:20.177 10:37:20 skip_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:20.437 * Looking for test storage... 
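For reference before the skip_rpc suite gets going: the rpc_daemon_integrity sequence above can be reproduced by hand against a running spdk_tgt. A minimal sketch, assuming the standard scripts/rpc.py client talking to the default /var/tmp/spdk.sock (the -b name pin is an assumption; the test let the RPC pick the name):
scripts/rpc.py bdev_malloc_create 8 512 -b Malloc2      # create the malloc bdev
scripts/rpc.py bdev_passthru_create -b Malloc2 -p Passthru0   # claim it
scripts/rpc.py bdev_get_bdevs | jq length               # expect 2: Malloc2 + Passthru0
scripts/rpc.py bdev_passthru_delete Passthru0           # tear down in reverse order
scripts/rpc.py bdev_malloc_delete Malloc2
scripts/rpc.py bdev_get_bdevs | jq length               # expect 0 again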
00:05:20.437 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:20.437 10:37:20 skip_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:20.437 10:37:20 skip_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:05:20.437 10:37:20 skip_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:20.437 10:37:20 skip_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:20.437 10:37:20 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:20.437 10:37:20 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:20.437 10:37:20 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:20.437 10:37:20 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:20.437 10:37:20 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:20.437 10:37:20 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:20.437 10:37:20 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:20.437 10:37:20 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:20.437 10:37:20 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:20.437 10:37:20 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:20.437 10:37:20 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:20.437 10:37:20 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:20.437 10:37:20 skip_rpc -- scripts/common.sh@345 -- # : 1 00:05:20.437 10:37:20 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:20.437 10:37:20 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:20.437 10:37:20 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:20.437 10:37:20 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:05:20.437 10:37:20 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:20.437 10:37:20 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:05:20.437 10:37:20 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:20.437 10:37:20 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:20.437 10:37:20 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:05:20.437 10:37:20 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:20.437 10:37:20 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:05:20.437 10:37:20 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:20.437 10:37:20 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:20.437 10:37:20 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:20.438 10:37:20 skip_rpc -- scripts/common.sh@368 -- # return 0 00:05:20.438 10:37:20 skip_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:20.438 10:37:20 skip_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:20.438 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.438 --rc genhtml_branch_coverage=1 00:05:20.438 --rc genhtml_function_coverage=1 00:05:20.438 --rc genhtml_legend=1 00:05:20.438 --rc geninfo_all_blocks=1 00:05:20.438 --rc geninfo_unexecuted_blocks=1 00:05:20.438 00:05:20.438 ' 00:05:20.438 10:37:20 skip_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:20.438 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.438 --rc genhtml_branch_coverage=1 00:05:20.438 --rc genhtml_function_coverage=1 00:05:20.438 --rc genhtml_legend=1 00:05:20.438 --rc geninfo_all_blocks=1 00:05:20.438 --rc geninfo_unexecuted_blocks=1 00:05:20.438 00:05:20.438 ' 00:05:20.438 10:37:20 skip_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 
00:05:20.438 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.438 --rc genhtml_branch_coverage=1 00:05:20.438 --rc genhtml_function_coverage=1 00:05:20.438 --rc genhtml_legend=1 00:05:20.438 --rc geninfo_all_blocks=1 00:05:20.438 --rc geninfo_unexecuted_blocks=1 00:05:20.438 00:05:20.438 ' 00:05:20.438 10:37:20 skip_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:20.438 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.438 --rc genhtml_branch_coverage=1 00:05:20.438 --rc genhtml_function_coverage=1 00:05:20.438 --rc genhtml_legend=1 00:05:20.438 --rc geninfo_all_blocks=1 00:05:20.438 --rc geninfo_unexecuted_blocks=1 00:05:20.438 00:05:20.438 ' 00:05:20.438 10:37:20 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:20.438 10:37:20 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:20.438 10:37:20 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:20.438 10:37:20 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:20.438 10:37:20 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:20.438 10:37:20 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:20.438 ************************************ 00:05:20.438 START TEST skip_rpc 00:05:20.438 ************************************ 00:05:20.438 10:37:20 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:05:20.438 10:37:20 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=69653 00:05:20.438 10:37:20 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:20.438 10:37:20 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:20.438 10:37:20 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:20.438 [2024-12-16 10:37:20.369107] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
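The skip_rpc case starting here runs the target with --no-rpc-server, so no RPC socket is ever created and any RPC attempt must fail; the plain sleep stands in for waitforlisten because there is nothing to listen on. A condensed sketch of the same flow, under those assumptions:
build/bin/spdk_tgt --no-rpc-server -m 0x1 &   # target comes up with no RPC listener
sleep 5                                       # no socket to poll, so just wait
! scripts/rpc.py spdk_get_version             # the call must fail for the test to pass
kill $!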
00:05:20.438 [2024-12-16 10:37:20.369221] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69653 ] 00:05:20.698 [2024-12-16 10:37:20.501402] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:20.698 [2024-12-16 10:37:20.533376] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.974 10:37:25 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:25.974 10:37:25 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:05:25.974 10:37:25 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:25.974 10:37:25 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:05:25.974 10:37:25 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:25.974 10:37:25 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:05:25.974 10:37:25 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:25.974 10:37:25 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:05:25.974 10:37:25 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:25.974 10:37:25 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:25.974 10:37:25 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:25.974 10:37:25 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:05:25.974 10:37:25 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:25.974 10:37:25 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:25.974 10:37:25 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:25.974 10:37:25 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:25.974 10:37:25 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 69653 00:05:25.974 10:37:25 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 69653 ']' 00:05:25.974 10:37:25 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 69653 00:05:25.974 10:37:25 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:05:25.974 10:37:25 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:25.974 10:37:25 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69653 00:05:25.974 killing process with pid 69653 00:05:25.974 10:37:25 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:25.974 10:37:25 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:25.974 10:37:25 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69653' 00:05:25.974 10:37:25 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 69653 00:05:25.974 10:37:25 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 69653 00:05:25.974 ************************************ 00:05:25.974 END TEST skip_rpc 00:05:25.974 ************************************ 00:05:25.974 00:05:25.974 real 0m5.266s 00:05:25.974 user 0m4.944s 00:05:25.974 sys 0m0.228s 00:05:25.974 10:37:25 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:25.974 10:37:25 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # 
set +x 00:05:25.974 10:37:25 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:25.974 10:37:25 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:25.974 10:37:25 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:25.974 10:37:25 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:25.974 ************************************ 00:05:25.974 START TEST skip_rpc_with_json 00:05:25.974 ************************************ 00:05:25.974 10:37:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:05:25.974 10:37:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:25.974 10:37:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=69741 00:05:25.974 10:37:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:25.974 10:37:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 69741 00:05:25.974 10:37:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 69741 ']' 00:05:25.974 10:37:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:25.974 10:37:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:25.974 10:37:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:25.974 10:37:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:25.974 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:25.974 10:37:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:25.974 10:37:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:25.974 [2024-12-16 10:37:25.691363] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
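A note on the NOT wrapper exercised in skip_rpc above: it succeeds only when the wrapped command fails. Simplified from the es bookkeeping visible in the trace (the real helper in common/autotest_common.sh does more validation), it behaves roughly like:
NOT() {
    # run the command, swallow its failure, and invert the exit status
    if "$@"; then return 1; else return 0; fi
}
NOT scripts/rpc.py spdk_get_version   # passes only while no RPC server is up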
00:05:25.974 [2024-12-16 10:37:25.691478] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69741 ] 00:05:25.974 [2024-12-16 10:37:25.825536] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:25.974 [2024-12-16 10:37:25.854000] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.539 10:37:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:26.540 10:37:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:05:26.540 10:37:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:26.540 10:37:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:26.540 10:37:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:26.540 [2024-12-16 10:37:26.518944] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:26.540 request: 00:05:26.540 { 00:05:26.540 "trtype": "tcp", 00:05:26.540 "method": "nvmf_get_transports", 00:05:26.540 "req_id": 1 00:05:26.540 } 00:05:26.540 Got JSON-RPC error response 00:05:26.540 response: 00:05:26.540 { 00:05:26.540 "code": -19, 00:05:26.540 "message": "No such device" 00:05:26.540 } 00:05:26.540 10:37:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:26.540 10:37:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:26.540 10:37:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:26.540 10:37:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:26.798 [2024-12-16 10:37:26.531037] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:26.798 10:37:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:26.798 10:37:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:26.798 10:37:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:26.798 10:37:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:26.798 10:37:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:26.798 10:37:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:26.798 { 00:05:26.798 "subsystems": [ 00:05:26.798 { 00:05:26.798 "subsystem": "fsdev", 00:05:26.798 "config": [ 00:05:26.798 { 00:05:26.798 "method": "fsdev_set_opts", 00:05:26.798 "params": { 00:05:26.798 "fsdev_io_pool_size": 65535, 00:05:26.798 "fsdev_io_cache_size": 256 00:05:26.798 } 00:05:26.798 } 00:05:26.798 ] 00:05:26.798 }, 00:05:26.798 { 00:05:26.798 "subsystem": "keyring", 00:05:26.798 "config": [] 00:05:26.798 }, 00:05:26.798 { 00:05:26.798 "subsystem": "iobuf", 00:05:26.798 "config": [ 00:05:26.798 { 00:05:26.798 "method": "iobuf_set_options", 00:05:26.798 "params": { 00:05:26.798 "small_pool_count": 8192, 00:05:26.798 "large_pool_count": 1024, 00:05:26.798 "small_bufsize": 8192, 00:05:26.798 "large_bufsize": 135168 00:05:26.798 } 00:05:26.798 } 00:05:26.798 ] 00:05:26.798 }, 00:05:26.798 { 00:05:26.798 "subsystem": "sock", 00:05:26.798 "config": [ 00:05:26.798 { 00:05:26.798 "method": 
"sock_set_default_impl", 00:05:26.798 "params": { 00:05:26.798 "impl_name": "posix" 00:05:26.798 } 00:05:26.798 }, 00:05:26.798 { 00:05:26.798 "method": "sock_impl_set_options", 00:05:26.798 "params": { 00:05:26.798 "impl_name": "ssl", 00:05:26.798 "recv_buf_size": 4096, 00:05:26.798 "send_buf_size": 4096, 00:05:26.798 "enable_recv_pipe": true, 00:05:26.798 "enable_quickack": false, 00:05:26.798 "enable_placement_id": 0, 00:05:26.798 "enable_zerocopy_send_server": true, 00:05:26.798 "enable_zerocopy_send_client": false, 00:05:26.798 "zerocopy_threshold": 0, 00:05:26.798 "tls_version": 0, 00:05:26.798 "enable_ktls": false 00:05:26.798 } 00:05:26.798 }, 00:05:26.798 { 00:05:26.798 "method": "sock_impl_set_options", 00:05:26.798 "params": { 00:05:26.798 "impl_name": "posix", 00:05:26.798 "recv_buf_size": 2097152, 00:05:26.798 "send_buf_size": 2097152, 00:05:26.798 "enable_recv_pipe": true, 00:05:26.798 "enable_quickack": false, 00:05:26.798 "enable_placement_id": 0, 00:05:26.798 "enable_zerocopy_send_server": true, 00:05:26.798 "enable_zerocopy_send_client": false, 00:05:26.798 "zerocopy_threshold": 0, 00:05:26.798 "tls_version": 0, 00:05:26.798 "enable_ktls": false 00:05:26.798 } 00:05:26.798 } 00:05:26.798 ] 00:05:26.798 }, 00:05:26.798 { 00:05:26.798 "subsystem": "vmd", 00:05:26.798 "config": [] 00:05:26.798 }, 00:05:26.798 { 00:05:26.798 "subsystem": "accel", 00:05:26.798 "config": [ 00:05:26.798 { 00:05:26.798 "method": "accel_set_options", 00:05:26.798 "params": { 00:05:26.798 "small_cache_size": 128, 00:05:26.798 "large_cache_size": 16, 00:05:26.798 "task_count": 2048, 00:05:26.798 "sequence_count": 2048, 00:05:26.798 "buf_count": 2048 00:05:26.798 } 00:05:26.798 } 00:05:26.798 ] 00:05:26.798 }, 00:05:26.798 { 00:05:26.798 "subsystem": "bdev", 00:05:26.798 "config": [ 00:05:26.799 { 00:05:26.799 "method": "bdev_set_options", 00:05:26.799 "params": { 00:05:26.799 "bdev_io_pool_size": 65535, 00:05:26.799 "bdev_io_cache_size": 256, 00:05:26.799 "bdev_auto_examine": true, 00:05:26.799 "iobuf_small_cache_size": 128, 00:05:26.799 "iobuf_large_cache_size": 16 00:05:26.799 } 00:05:26.799 }, 00:05:26.799 { 00:05:26.799 "method": "bdev_raid_set_options", 00:05:26.799 "params": { 00:05:26.799 "process_window_size_kb": 1024, 00:05:26.799 "process_max_bandwidth_mb_sec": 0 00:05:26.799 } 00:05:26.799 }, 00:05:26.799 { 00:05:26.799 "method": "bdev_iscsi_set_options", 00:05:26.799 "params": { 00:05:26.799 "timeout_sec": 30 00:05:26.799 } 00:05:26.799 }, 00:05:26.799 { 00:05:26.799 "method": "bdev_nvme_set_options", 00:05:26.799 "params": { 00:05:26.799 "action_on_timeout": "none", 00:05:26.799 "timeout_us": 0, 00:05:26.799 "timeout_admin_us": 0, 00:05:26.799 "keep_alive_timeout_ms": 10000, 00:05:26.799 "arbitration_burst": 0, 00:05:26.799 "low_priority_weight": 0, 00:05:26.799 "medium_priority_weight": 0, 00:05:26.799 "high_priority_weight": 0, 00:05:26.799 "nvme_adminq_poll_period_us": 10000, 00:05:26.799 "nvme_ioq_poll_period_us": 0, 00:05:26.799 "io_queue_requests": 0, 00:05:26.799 "delay_cmd_submit": true, 00:05:26.799 "transport_retry_count": 4, 00:05:26.799 "bdev_retry_count": 3, 00:05:26.799 "transport_ack_timeout": 0, 00:05:26.799 "ctrlr_loss_timeout_sec": 0, 00:05:26.799 "reconnect_delay_sec": 0, 00:05:26.799 "fast_io_fail_timeout_sec": 0, 00:05:26.799 "disable_auto_failback": false, 00:05:26.799 "generate_uuids": false, 00:05:26.799 "transport_tos": 0, 00:05:26.799 "nvme_error_stat": false, 00:05:26.799 "rdma_srq_size": 0, 00:05:26.799 "io_path_stat": false, 00:05:26.799 
"allow_accel_sequence": false, 00:05:26.799 "rdma_max_cq_size": 0, 00:05:26.799 "rdma_cm_event_timeout_ms": 0, 00:05:26.799 "dhchap_digests": [ 00:05:26.799 "sha256", 00:05:26.799 "sha384", 00:05:26.799 "sha512" 00:05:26.799 ], 00:05:26.799 "dhchap_dhgroups": [ 00:05:26.799 "null", 00:05:26.799 "ffdhe2048", 00:05:26.799 "ffdhe3072", 00:05:26.799 "ffdhe4096", 00:05:26.799 "ffdhe6144", 00:05:26.799 "ffdhe8192" 00:05:26.799 ] 00:05:26.799 } 00:05:26.799 }, 00:05:26.799 { 00:05:26.799 "method": "bdev_nvme_set_hotplug", 00:05:26.799 "params": { 00:05:26.799 "period_us": 100000, 00:05:26.799 "enable": false 00:05:26.799 } 00:05:26.799 }, 00:05:26.799 { 00:05:26.799 "method": "bdev_wait_for_examine" 00:05:26.799 } 00:05:26.799 ] 00:05:26.799 }, 00:05:26.799 { 00:05:26.799 "subsystem": "scsi", 00:05:26.799 "config": null 00:05:26.799 }, 00:05:26.799 { 00:05:26.799 "subsystem": "scheduler", 00:05:26.799 "config": [ 00:05:26.799 { 00:05:26.799 "method": "framework_set_scheduler", 00:05:26.799 "params": { 00:05:26.799 "name": "static" 00:05:26.799 } 00:05:26.799 } 00:05:26.799 ] 00:05:26.799 }, 00:05:26.799 { 00:05:26.799 "subsystem": "vhost_scsi", 00:05:26.799 "config": [] 00:05:26.799 }, 00:05:26.799 { 00:05:26.799 "subsystem": "vhost_blk", 00:05:26.799 "config": [] 00:05:26.799 }, 00:05:26.799 { 00:05:26.799 "subsystem": "ublk", 00:05:26.799 "config": [] 00:05:26.799 }, 00:05:26.799 { 00:05:26.799 "subsystem": "nbd", 00:05:26.799 "config": [] 00:05:26.799 }, 00:05:26.799 { 00:05:26.799 "subsystem": "nvmf", 00:05:26.799 "config": [ 00:05:26.799 { 00:05:26.799 "method": "nvmf_set_config", 00:05:26.799 "params": { 00:05:26.799 "discovery_filter": "match_any", 00:05:26.799 "admin_cmd_passthru": { 00:05:26.799 "identify_ctrlr": false 00:05:26.799 }, 00:05:26.799 "dhchap_digests": [ 00:05:26.799 "sha256", 00:05:26.799 "sha384", 00:05:26.799 "sha512" 00:05:26.799 ], 00:05:26.799 "dhchap_dhgroups": [ 00:05:26.799 "null", 00:05:26.799 "ffdhe2048", 00:05:26.799 "ffdhe3072", 00:05:26.799 "ffdhe4096", 00:05:26.799 "ffdhe6144", 00:05:26.799 "ffdhe8192" 00:05:26.799 ] 00:05:26.799 } 00:05:26.799 }, 00:05:26.799 { 00:05:26.799 "method": "nvmf_set_max_subsystems", 00:05:26.799 "params": { 00:05:26.799 "max_subsystems": 1024 00:05:26.799 } 00:05:26.799 }, 00:05:26.799 { 00:05:26.799 "method": "nvmf_set_crdt", 00:05:26.799 "params": { 00:05:26.799 "crdt1": 0, 00:05:26.799 "crdt2": 0, 00:05:26.799 "crdt3": 0 00:05:26.799 } 00:05:26.799 }, 00:05:26.799 { 00:05:26.799 "method": "nvmf_create_transport", 00:05:26.799 "params": { 00:05:26.799 "trtype": "TCP", 00:05:26.799 "max_queue_depth": 128, 00:05:26.799 "max_io_qpairs_per_ctrlr": 127, 00:05:26.799 "in_capsule_data_size": 4096, 00:05:26.799 "max_io_size": 131072, 00:05:26.799 "io_unit_size": 131072, 00:05:26.799 "max_aq_depth": 128, 00:05:26.799 "num_shared_buffers": 511, 00:05:26.799 "buf_cache_size": 4294967295, 00:05:26.799 "dif_insert_or_strip": false, 00:05:26.799 "zcopy": false, 00:05:26.799 "c2h_success": true, 00:05:26.799 "sock_priority": 0, 00:05:26.799 "abort_timeout_sec": 1, 00:05:26.799 "ack_timeout": 0, 00:05:26.799 "data_wr_pool_size": 0 00:05:26.799 } 00:05:26.799 } 00:05:26.799 ] 00:05:26.799 }, 00:05:26.799 { 00:05:26.799 "subsystem": "iscsi", 00:05:26.799 "config": [ 00:05:26.799 { 00:05:26.799 "method": "iscsi_set_options", 00:05:26.799 "params": { 00:05:26.799 "node_base": "iqn.2016-06.io.spdk", 00:05:26.799 "max_sessions": 128, 00:05:26.799 "max_connections_per_session": 2, 00:05:26.799 "max_queue_depth": 64, 00:05:26.799 "default_time2wait": 2, 
00:05:26.799 "default_time2retain": 20, 00:05:26.799 "first_burst_length": 8192, 00:05:26.799 "immediate_data": true, 00:05:26.799 "allow_duplicated_isid": false, 00:05:26.799 "error_recovery_level": 0, 00:05:26.799 "nop_timeout": 60, 00:05:26.799 "nop_in_interval": 30, 00:05:26.799 "disable_chap": false, 00:05:26.799 "require_chap": false, 00:05:26.799 "mutual_chap": false, 00:05:26.799 "chap_group": 0, 00:05:26.799 "max_large_datain_per_connection": 64, 00:05:26.799 "max_r2t_per_connection": 4, 00:05:26.799 "pdu_pool_size": 36864, 00:05:26.799 "immediate_data_pool_size": 16384, 00:05:26.799 "data_out_pool_size": 2048 00:05:26.799 } 00:05:26.799 } 00:05:26.799 ] 00:05:26.799 } 00:05:26.799 ] 00:05:26.799 } 00:05:26.799 10:37:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:26.799 10:37:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 69741 00:05:26.799 10:37:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 69741 ']' 00:05:26.799 10:37:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 69741 00:05:26.799 10:37:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:05:26.799 10:37:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:26.799 10:37:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69741 00:05:26.799 killing process with pid 69741 00:05:26.799 10:37:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:26.799 10:37:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:26.799 10:37:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69741' 00:05:26.799 10:37:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 69741 00:05:26.799 10:37:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 69741 00:05:27.060 10:37:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=69768 00:05:27.060 10:37:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:27.060 10:37:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:32.324 10:37:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 69768 00:05:32.324 10:37:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 69768 ']' 00:05:32.324 10:37:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 69768 00:05:32.324 10:37:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:05:32.324 10:37:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:32.324 10:37:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69768 00:05:32.324 killing process with pid 69768 00:05:32.324 10:37:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:32.324 10:37:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:32.324 10:37:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69768' 00:05:32.325 10:37:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 69768 
00:05:32.325 10:37:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 69768 00:05:32.325 10:37:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:32.325 10:37:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:32.325 ************************************ 00:05:32.325 END TEST skip_rpc_with_json 00:05:32.325 ************************************ 00:05:32.325 00:05:32.325 real 0m6.579s 00:05:32.325 user 0m6.270s 00:05:32.325 sys 0m0.518s 00:05:32.325 10:37:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:32.325 10:37:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:32.325 10:37:32 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:32.325 10:37:32 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:32.325 10:37:32 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:32.325 10:37:32 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:32.325 ************************************ 00:05:32.325 START TEST skip_rpc_with_delay 00:05:32.325 ************************************ 00:05:32.325 10:37:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:05:32.325 10:37:32 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:32.325 10:37:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:05:32.325 10:37:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:32.325 10:37:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:32.325 10:37:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:32.325 10:37:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:32.325 10:37:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:32.325 10:37:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:32.325 10:37:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:32.325 10:37:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:32.325 10:37:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:32.325 10:37:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:32.325 [2024-12-16 10:37:32.304660] app.c: 840:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
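skip_rpc_with_delay is a pure flag-validation check: --wait-for-rpc defers framework init until an RPC (framework_start_init) requests it, which is impossible once --no-rpc-server has removed the server, so startup must error out exactly as logged above. In test form, the whole case is one negative assertion:
NOT build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc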
00:05:32.325 [2024-12-16 10:37:32.304780] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:32.585 10:37:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:05:32.585 10:37:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:32.585 10:37:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:32.585 10:37:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:32.585 00:05:32.585 real 0m0.110s 00:05:32.585 user 0m0.056s 00:05:32.585 sys 0m0.052s 00:05:32.585 10:37:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:32.586 ************************************ 00:05:32.586 END TEST skip_rpc_with_delay 00:05:32.586 ************************************ 00:05:32.586 10:37:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:32.586 10:37:32 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:32.586 10:37:32 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:32.586 10:37:32 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:32.586 10:37:32 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:32.586 10:37:32 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:32.586 10:37:32 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:32.586 ************************************ 00:05:32.586 START TEST exit_on_failed_rpc_init 00:05:32.586 ************************************ 00:05:32.586 10:37:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:05:32.586 10:37:32 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=69875 00:05:32.586 10:37:32 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 69875 00:05:32.586 10:37:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 69875 ']' 00:05:32.586 10:37:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:32.586 10:37:32 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:32.586 10:37:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:32.586 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:32.586 10:37:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:32.586 10:37:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:32.586 10:37:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:32.586 [2024-12-16 10:37:32.458976] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:05:32.586 [2024-12-16 10:37:32.459234] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69875 ] 00:05:32.845 [2024-12-16 10:37:32.594566] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:32.845 [2024-12-16 10:37:32.622398] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.412 10:37:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:33.412 10:37:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:05:33.412 10:37:33 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:33.412 10:37:33 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:33.412 10:37:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:05:33.412 10:37:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:33.412 10:37:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:33.412 10:37:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:33.412 10:37:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:33.412 10:37:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:33.412 10:37:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:33.412 10:37:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:33.412 10:37:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:33.412 10:37:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:33.412 10:37:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:33.412 [2024-12-16 10:37:33.362183] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:05:33.412 [2024-12-16 10:37:33.362482] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69893 ] 00:05:33.671 [2024-12-16 10:37:33.498201] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:33.671 [2024-12-16 10:37:33.529653] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:33.671 [2024-12-16 10:37:33.529894] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
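The failure injected here is a socket collision: the second spdk_tgt (-m 0x2) is pointed at the default /var/tmp/spdk.sock that pid 69875 already owns, so rpc.c refuses to listen and init aborts, which is exactly what exit_on_failed_rpc_init wants to observe. Distilled to a sketch:
build/bin/spdk_tgt -m 0x1 &       # first target claims /var/tmp/spdk.sock
NOT build/bin/spdk_tgt -m 0x2     # second target trips over the same socket and exits non-zero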
00:05:33.671 [2024-12-16 10:37:33.530297] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:33.671 [2024-12-16 10:37:33.530381] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:33.671 10:37:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:05:33.671 10:37:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:33.671 10:37:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:05:33.671 10:37:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:05:33.671 10:37:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:05:33.671 10:37:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:33.671 10:37:33 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:33.671 10:37:33 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 69875 00:05:33.671 10:37:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 69875 ']' 00:05:33.671 10:37:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 69875 00:05:33.671 10:37:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:05:33.671 10:37:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:33.671 10:37:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69875 00:05:33.671 killing process with pid 69875 00:05:33.671 10:37:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:33.671 10:37:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:33.671 10:37:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69875' 00:05:33.671 10:37:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 69875 00:05:33.671 10:37:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 69875 00:05:33.930 00:05:33.930 real 0m1.474s 00:05:33.930 user 0m1.657s 00:05:33.930 sys 0m0.337s 00:05:33.930 10:37:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:33.930 ************************************ 00:05:33.930 END TEST exit_on_failed_rpc_init 00:05:33.930 ************************************ 00:05:33.931 10:37:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:33.931 10:37:33 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:33.931 ************************************ 00:05:33.931 END TEST skip_rpc 00:05:33.931 ************************************ 00:05:33.931 00:05:33.931 real 0m13.759s 00:05:33.931 user 0m13.071s 00:05:33.931 sys 0m1.295s 00:05:33.931 10:37:33 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:33.931 10:37:33 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:34.193 10:37:33 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:34.193 10:37:33 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:34.193 10:37:33 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:34.193 10:37:33 -- common/autotest_common.sh@10 -- # set +x 00:05:34.193 
************************************ 00:05:34.193 START TEST rpc_client 00:05:34.193 ************************************ 00:05:34.193 10:37:33 rpc_client -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:34.193 * Looking for test storage... 00:05:34.193 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:05:34.193 10:37:34 rpc_client -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:34.193 10:37:34 rpc_client -- common/autotest_common.sh@1681 -- # lcov --version 00:05:34.193 10:37:34 rpc_client -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:34.193 10:37:34 rpc_client -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:34.193 10:37:34 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:34.193 10:37:34 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:34.193 10:37:34 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:34.193 10:37:34 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:05:34.193 10:37:34 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:05:34.193 10:37:34 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:05:34.193 10:37:34 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:05:34.193 10:37:34 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:05:34.193 10:37:34 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:05:34.193 10:37:34 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:05:34.193 10:37:34 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:34.193 10:37:34 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:05:34.193 10:37:34 rpc_client -- scripts/common.sh@345 -- # : 1 00:05:34.193 10:37:34 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:34.193 10:37:34 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:34.193 10:37:34 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:05:34.193 10:37:34 rpc_client -- scripts/common.sh@353 -- # local d=1 00:05:34.193 10:37:34 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:34.193 10:37:34 rpc_client -- scripts/common.sh@355 -- # echo 1 00:05:34.193 10:37:34 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:05:34.193 10:37:34 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:05:34.193 10:37:34 rpc_client -- scripts/common.sh@353 -- # local d=2 00:05:34.193 10:37:34 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:34.193 10:37:34 rpc_client -- scripts/common.sh@355 -- # echo 2 00:05:34.193 10:37:34 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:05:34.193 10:37:34 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:34.193 10:37:34 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:34.193 10:37:34 rpc_client -- scripts/common.sh@368 -- # return 0 00:05:34.193 10:37:34 rpc_client -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:34.193 10:37:34 rpc_client -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:34.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.193 --rc genhtml_branch_coverage=1 00:05:34.193 --rc genhtml_function_coverage=1 00:05:34.193 --rc genhtml_legend=1 00:05:34.193 --rc geninfo_all_blocks=1 00:05:34.193 --rc geninfo_unexecuted_blocks=1 00:05:34.193 00:05:34.193 ' 00:05:34.193 10:37:34 rpc_client -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:34.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.193 --rc genhtml_branch_coverage=1 00:05:34.193 --rc genhtml_function_coverage=1 00:05:34.193 --rc genhtml_legend=1 00:05:34.193 --rc geninfo_all_blocks=1 00:05:34.193 --rc geninfo_unexecuted_blocks=1 00:05:34.193 00:05:34.193 ' 00:05:34.193 10:37:34 rpc_client -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:34.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.193 --rc genhtml_branch_coverage=1 00:05:34.193 --rc genhtml_function_coverage=1 00:05:34.193 --rc genhtml_legend=1 00:05:34.193 --rc geninfo_all_blocks=1 00:05:34.193 --rc geninfo_unexecuted_blocks=1 00:05:34.193 00:05:34.193 ' 00:05:34.193 10:37:34 rpc_client -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:34.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.193 --rc genhtml_branch_coverage=1 00:05:34.193 --rc genhtml_function_coverage=1 00:05:34.193 --rc genhtml_legend=1 00:05:34.193 --rc geninfo_all_blocks=1 00:05:34.193 --rc geninfo_unexecuted_blocks=1 00:05:34.193 00:05:34.193 ' 00:05:34.193 10:37:34 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:05:34.193 OK 00:05:34.193 10:37:34 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:34.193 00:05:34.193 real 0m0.183s 00:05:34.193 user 0m0.114s 00:05:34.193 sys 0m0.075s 00:05:34.193 ************************************ 00:05:34.193 END TEST rpc_client 00:05:34.193 ************************************ 00:05:34.193 10:37:34 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:34.193 10:37:34 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:34.193 10:37:34 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:34.193 10:37:34 -- 
common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:34.455 10:37:34 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:34.455 10:37:34 -- common/autotest_common.sh@10 -- # set +x 00:05:34.455 ************************************ 00:05:34.455 START TEST json_config 00:05:34.455 ************************************ 00:05:34.455 10:37:34 json_config -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:34.455 10:37:34 json_config -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:34.455 10:37:34 json_config -- common/autotest_common.sh@1681 -- # lcov --version 00:05:34.455 10:37:34 json_config -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:34.455 10:37:34 json_config -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:34.455 10:37:34 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:34.455 10:37:34 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:34.455 10:37:34 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:34.455 10:37:34 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:05:34.455 10:37:34 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:05:34.455 10:37:34 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:05:34.456 10:37:34 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:05:34.456 10:37:34 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:05:34.456 10:37:34 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:05:34.456 10:37:34 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:05:34.456 10:37:34 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:34.456 10:37:34 json_config -- scripts/common.sh@344 -- # case "$op" in 00:05:34.456 10:37:34 json_config -- scripts/common.sh@345 -- # : 1 00:05:34.456 10:37:34 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:34.456 10:37:34 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:34.456 10:37:34 json_config -- scripts/common.sh@365 -- # decimal 1 00:05:34.456 10:37:34 json_config -- scripts/common.sh@353 -- # local d=1 00:05:34.456 10:37:34 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:34.456 10:37:34 json_config -- scripts/common.sh@355 -- # echo 1 00:05:34.456 10:37:34 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:05:34.456 10:37:34 json_config -- scripts/common.sh@366 -- # decimal 2 00:05:34.456 10:37:34 json_config -- scripts/common.sh@353 -- # local d=2 00:05:34.456 10:37:34 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:34.456 10:37:34 json_config -- scripts/common.sh@355 -- # echo 2 00:05:34.456 10:37:34 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:05:34.456 10:37:34 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:34.456 10:37:34 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:34.456 10:37:34 json_config -- scripts/common.sh@368 -- # return 0 00:05:34.456 10:37:34 json_config -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:34.456 10:37:34 json_config -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:34.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.456 --rc genhtml_branch_coverage=1 00:05:34.456 --rc genhtml_function_coverage=1 00:05:34.456 --rc genhtml_legend=1 00:05:34.456 --rc geninfo_all_blocks=1 00:05:34.456 --rc geninfo_unexecuted_blocks=1 00:05:34.456 00:05:34.456 ' 00:05:34.456 10:37:34 json_config -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:34.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.456 --rc genhtml_branch_coverage=1 00:05:34.456 --rc genhtml_function_coverage=1 00:05:34.456 --rc genhtml_legend=1 00:05:34.456 --rc geninfo_all_blocks=1 00:05:34.456 --rc geninfo_unexecuted_blocks=1 00:05:34.456 00:05:34.456 ' 00:05:34.456 10:37:34 json_config -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:34.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.456 --rc genhtml_branch_coverage=1 00:05:34.456 --rc genhtml_function_coverage=1 00:05:34.456 --rc genhtml_legend=1 00:05:34.456 --rc geninfo_all_blocks=1 00:05:34.456 --rc geninfo_unexecuted_blocks=1 00:05:34.456 00:05:34.456 ' 00:05:34.456 10:37:34 json_config -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:34.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.456 --rc genhtml_branch_coverage=1 00:05:34.456 --rc genhtml_function_coverage=1 00:05:34.456 --rc genhtml_legend=1 00:05:34.456 --rc geninfo_all_blocks=1 00:05:34.456 --rc geninfo_unexecuted_blocks=1 00:05:34.456 00:05:34.456 ' 00:05:34.456 10:37:34 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:34.456 10:37:34 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:34.456 10:37:34 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:34.456 10:37:34 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:34.456 10:37:34 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:34.456 10:37:34 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:34.456 10:37:34 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:34.456 10:37:34 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:34.456 10:37:34 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:34.456 10:37:34 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:34.456 10:37:34 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:34.456 10:37:34 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:34.456 10:37:34 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2f1bae93-a412-4c9d-ba56-7b4de9b8b370 00:05:34.456 10:37:34 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=2f1bae93-a412-4c9d-ba56-7b4de9b8b370 00:05:34.456 10:37:34 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:34.456 10:37:34 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:34.456 10:37:34 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:34.456 10:37:34 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:34.456 10:37:34 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:34.456 10:37:34 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:05:34.456 10:37:34 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:34.456 10:37:34 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:34.456 10:37:34 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:34.456 10:37:34 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:34.456 10:37:34 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:34.456 10:37:34 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:34.456 10:37:34 json_config -- paths/export.sh@5 -- # export PATH 00:05:34.456 10:37:34 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:34.456 10:37:34 json_config -- nvmf/common.sh@51 -- # : 0 00:05:34.456 10:37:34 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:34.456 10:37:34 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:34.456 10:37:34 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:34.456 10:37:34 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:34.456 10:37:34 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:34.456 10:37:34 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:34.456 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:34.456 10:37:34 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:34.456 10:37:34 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:34.456 10:37:34 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:34.456 10:37:34 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:34.456 10:37:34 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:34.456 10:37:34 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:34.456 10:37:34 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:34.456 10:37:34 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:34.456 10:37:34 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:05:34.456 WARNING: No tests are enabled so not running JSON configuration tests 00:05:34.456 10:37:34 json_config -- json_config/json_config.sh@28 -- # exit 0 00:05:34.456 00:05:34.456 real 0m0.144s 00:05:34.456 user 0m0.092s 00:05:34.456 sys 0m0.053s 00:05:34.456 10:37:34 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:34.456 10:37:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:34.456 ************************************ 00:05:34.456 END TEST json_config 00:05:34.456 ************************************ 00:05:34.456 10:37:34 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:34.456 10:37:34 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:34.456 10:37:34 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:34.456 10:37:34 -- common/autotest_common.sh@10 -- # set +x 00:05:34.456 ************************************ 00:05:34.456 START TEST json_config_extra_key 00:05:34.456 ************************************ 00:05:34.456 10:37:34 json_config_extra_key -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:34.456 10:37:34 json_config_extra_key -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:34.456 10:37:34 json_config_extra_key -- common/autotest_common.sh@1681 -- # lcov --version 00:05:34.456 10:37:34 json_config_extra_key -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:34.717 10:37:34 json_config_extra_key -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:34.717 10:37:34 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:34.717 10:37:34 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:34.717 10:37:34 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:34.717 10:37:34 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:34.717 10:37:34 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:34.717 10:37:34 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:34.717 10:37:34 
json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:34.717 10:37:34 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:34.717 10:37:34 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:05:34.717 10:37:34 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:34.718 10:37:34 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:34.718 10:37:34 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:34.718 10:37:34 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:34.718 10:37:34 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:34.718 10:37:34 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:34.718 10:37:34 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:34.718 10:37:34 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:34.718 10:37:34 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:34.718 10:37:34 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:34.718 10:37:34 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:34.718 10:37:34 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:34.718 10:37:34 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:34.718 10:37:34 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:34.718 10:37:34 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:34.718 10:37:34 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:34.718 10:37:34 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:34.718 10:37:34 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:34.718 10:37:34 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:34.718 10:37:34 json_config_extra_key -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:34.718 10:37:34 json_config_extra_key -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:34.718 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.718 --rc genhtml_branch_coverage=1 00:05:34.718 --rc genhtml_function_coverage=1 00:05:34.718 --rc genhtml_legend=1 00:05:34.718 --rc geninfo_all_blocks=1 00:05:34.718 --rc geninfo_unexecuted_blocks=1 00:05:34.718 00:05:34.718 ' 00:05:34.718 10:37:34 json_config_extra_key -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:34.718 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.718 --rc genhtml_branch_coverage=1 00:05:34.718 --rc genhtml_function_coverage=1 00:05:34.718 --rc genhtml_legend=1 00:05:34.718 --rc geninfo_all_blocks=1 00:05:34.718 --rc geninfo_unexecuted_blocks=1 00:05:34.718 00:05:34.718 ' 00:05:34.718 10:37:34 json_config_extra_key -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:34.718 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.718 --rc genhtml_branch_coverage=1 00:05:34.718 --rc genhtml_function_coverage=1 00:05:34.718 --rc genhtml_legend=1 00:05:34.718 --rc geninfo_all_blocks=1 00:05:34.718 --rc geninfo_unexecuted_blocks=1 00:05:34.718 00:05:34.718 ' 00:05:34.718 10:37:34 json_config_extra_key -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:34.718 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.718 --rc genhtml_branch_coverage=1 00:05:34.718 --rc 
genhtml_function_coverage=1 00:05:34.718 --rc genhtml_legend=1 00:05:34.718 --rc geninfo_all_blocks=1 00:05:34.718 --rc geninfo_unexecuted_blocks=1 00:05:34.718 00:05:34.718 ' 00:05:34.718 10:37:34 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:34.718 10:37:34 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:34.718 10:37:34 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:34.718 10:37:34 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:34.718 10:37:34 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:34.718 10:37:34 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:34.718 10:37:34 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:34.718 10:37:34 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:34.718 10:37:34 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:34.718 10:37:34 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:34.718 10:37:34 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:34.718 10:37:34 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:34.718 10:37:34 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2f1bae93-a412-4c9d-ba56-7b4de9b8b370 00:05:34.718 10:37:34 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=2f1bae93-a412-4c9d-ba56-7b4de9b8b370 00:05:34.718 10:37:34 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:34.718 10:37:34 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:34.718 10:37:34 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:34.718 10:37:34 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:34.718 10:37:34 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:34.718 10:37:34 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:34.718 10:37:34 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:34.718 10:37:34 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:34.718 10:37:34 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:34.718 10:37:34 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:34.718 10:37:34 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:34.718 10:37:34 json_config_extra_key -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:34.718 10:37:34 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:34.718 10:37:34 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:34.718 10:37:34 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:34.718 10:37:34 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:34.718 10:37:34 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:34.718 10:37:34 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:34.718 10:37:34 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:34.718 10:37:34 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:34.718 10:37:34 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:34.718 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:34.718 10:37:34 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:34.718 10:37:34 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:34.718 10:37:34 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:34.718 10:37:34 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:34.718 10:37:34 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:34.718 10:37:34 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:34.718 10:37:34 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:34.718 10:37:34 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:34.718 10:37:34 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:34.718 INFO: launching applications... 00:05:34.718 10:37:34 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:34.718 10:37:34 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:05:34.718 10:37:34 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:34.718 10:37:34 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:34.718 10:37:34 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
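[editor's note] The "[: : integer expression expected" message above (nvmf/common.sh line 33, here and earlier in the json_config run) is bash complaining that '[' '' -eq 1 ']' compares an empty string as an integer. A minimal sketch of the usual guard, using a hypothetical flag name SOME_FLAG since the real variable at common.sh line 33 is not visible in this log:

    #!/usr/bin/env bash
    # SOME_FLAG is a placeholder; it may be unset or empty in the environment.
    # ${VAR:-0} substitutes 0 when the variable is unset or empty, so the
    # numeric test always sees an integer.
    if [ "${SOME_FLAG:-0}" -eq 1 ]; then
        echo "flag enabled"
    else
        echo "flag disabled"
    fi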
00:05:34.718 10:37:34 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:34.718 10:37:34 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:34.718 10:37:34 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:34.718 10:37:34 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:34.718 10:37:34 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:34.718 10:37:34 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:34.718 10:37:34 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:34.718 10:37:34 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:34.718 10:37:34 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=70070 00:05:34.718 10:37:34 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:34.718 Waiting for target to run... 00:05:34.718 10:37:34 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 70070 /var/tmp/spdk_tgt.sock 00:05:34.718 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:34.718 10:37:34 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 70070 ']' 00:05:34.718 10:37:34 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:34.718 10:37:34 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:34.718 10:37:34 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:34.718 10:37:34 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:34.719 10:37:34 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:34.719 10:37:34 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:34.719 [2024-12-16 10:37:34.605987] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:05:34.719 [2024-12-16 10:37:34.606109] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70070 ] 00:05:34.978 [2024-12-16 10:37:34.902745] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:34.978 [2024-12-16 10:37:34.918084] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.545 00:05:35.545 INFO: shutting down applications... 00:05:35.545 10:37:35 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:35.545 10:37:35 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:05:35.545 10:37:35 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:35.545 10:37:35 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
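[editor's note] The startup traced above launches spdk_tgt with -r /var/tmp/spdk_tgt.sock and then blocks in waitforlisten until the RPC socket answers. A sketch of that polling loop, reusing rpc.py's -t (timeout) and -s (server address) options as seen elsewhere in this run; this is an illustration, not the verbatim helper from autotest_common.sh:

    #!/usr/bin/env bash
    # Poll until the target's RPC socket serves requests or retries run out.
    pid=$1                                  # spdk_tgt process id
    sock=${2:-/var/tmp/spdk_tgt.sock}       # RPC listen address
    for ((i = 0; i < 100; i++)); do
        kill -0 "$pid" 2>/dev/null || { echo "target died" >&2; exit 1; }
        if scripts/rpc.py -t 2 -s "$sock" rpc_get_methods >/dev/null 2>&1; then
            exit 0                          # target is up and serving RPCs
        fi
        sleep 0.1
    done
    echo "timed out waiting for $sock" >&2
    exit 1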
00:05:35.545 10:37:35 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:35.545 10:37:35 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:35.545 10:37:35 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:35.545 10:37:35 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 70070 ]] 00:05:35.545 10:37:35 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 70070 00:05:35.545 10:37:35 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:35.545 10:37:35 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:35.545 10:37:35 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 70070 00:05:35.545 10:37:35 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:36.117 10:37:35 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:36.117 10:37:35 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:36.117 10:37:35 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 70070 00:05:36.118 10:37:35 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:36.118 10:37:35 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:36.118 10:37:35 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:36.118 10:37:35 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:36.118 SPDK target shutdown done 00:05:36.118 10:37:35 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:36.118 Success 00:05:36.118 00:05:36.118 real 0m1.559s 00:05:36.118 user 0m1.216s 00:05:36.118 sys 0m0.346s 00:05:36.118 10:37:35 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:36.118 10:37:35 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:36.118 ************************************ 00:05:36.118 END TEST json_config_extra_key 00:05:36.118 ************************************ 00:05:36.118 10:37:35 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:36.118 10:37:35 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:36.118 10:37:35 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:36.118 10:37:35 -- common/autotest_common.sh@10 -- # set +x 00:05:36.118 ************************************ 00:05:36.118 START TEST alias_rpc 00:05:36.118 ************************************ 00:05:36.118 10:37:36 alias_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:36.118 * Looking for test storage... 
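[editor's note] The shutdown sequence above sends SIGINT, then re-probes the pid every half second for at most 30 iterations before declaring "SPDK target shutdown done". A condensed sketch of the same pattern (pid taken from $1; mirrors the trace rather than quoting json_config/common.sh):

    #!/usr/bin/env bash
    pid=$1
    kill -SIGINT "$pid"                      # ask spdk_tgt to exit cleanly
    for ((i = 0; i < 30; i++)); do
        kill -0 "$pid" 2>/dev/null || {      # kill -0 probes; no signal is sent
            echo "SPDK target shutdown done"
            exit 0
        }
        sleep 0.5
    done
    echo "target still alive after 15s" >&2
    exit 1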
00:05:36.118 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:36.118 10:37:36 alias_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:36.118 10:37:36 alias_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:36.118 10:37:36 alias_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:05:36.377 10:37:36 alias_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:36.377 10:37:36 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:36.377 10:37:36 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:36.377 10:37:36 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:36.377 10:37:36 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:36.377 10:37:36 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:36.377 10:37:36 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:36.377 10:37:36 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:36.377 10:37:36 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:36.377 10:37:36 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:36.377 10:37:36 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:36.377 10:37:36 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:36.377 10:37:36 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:36.377 10:37:36 alias_rpc -- scripts/common.sh@345 -- # : 1 00:05:36.377 10:37:36 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:36.377 10:37:36 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:36.377 10:37:36 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:36.377 10:37:36 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:36.377 10:37:36 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:36.377 10:37:36 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:36.377 10:37:36 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:36.377 10:37:36 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:36.377 10:37:36 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:36.377 10:37:36 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:36.377 10:37:36 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:36.377 10:37:36 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:36.377 10:37:36 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:36.377 10:37:36 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:36.377 10:37:36 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:36.377 10:37:36 alias_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:36.377 10:37:36 alias_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:36.377 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.377 --rc genhtml_branch_coverage=1 00:05:36.377 --rc genhtml_function_coverage=1 00:05:36.377 --rc genhtml_legend=1 00:05:36.377 --rc geninfo_all_blocks=1 00:05:36.377 --rc geninfo_unexecuted_blocks=1 00:05:36.377 00:05:36.377 ' 00:05:36.377 10:37:36 alias_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:36.377 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.377 --rc genhtml_branch_coverage=1 00:05:36.377 --rc genhtml_function_coverage=1 00:05:36.377 --rc genhtml_legend=1 00:05:36.377 --rc geninfo_all_blocks=1 00:05:36.377 --rc geninfo_unexecuted_blocks=1 00:05:36.377 00:05:36.377 ' 00:05:36.377 10:37:36 alias_rpc -- 
common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:36.377 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.377 --rc genhtml_branch_coverage=1 00:05:36.377 --rc genhtml_function_coverage=1 00:05:36.377 --rc genhtml_legend=1 00:05:36.377 --rc geninfo_all_blocks=1 00:05:36.377 --rc geninfo_unexecuted_blocks=1 00:05:36.377 00:05:36.377 ' 00:05:36.377 10:37:36 alias_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:36.377 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.377 --rc genhtml_branch_coverage=1 00:05:36.377 --rc genhtml_function_coverage=1 00:05:36.377 --rc genhtml_legend=1 00:05:36.377 --rc geninfo_all_blocks=1 00:05:36.377 --rc geninfo_unexecuted_blocks=1 00:05:36.377 00:05:36.377 ' 00:05:36.377 10:37:36 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:36.377 10:37:36 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=70144 00:05:36.377 10:37:36 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 70144 00:05:36.377 10:37:36 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 70144 ']' 00:05:36.377 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:36.377 10:37:36 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:36.377 10:37:36 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:36.377 10:37:36 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:36.377 10:37:36 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:36.377 10:37:36 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:36.377 10:37:36 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:36.378 [2024-12-16 10:37:36.228537] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
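[editor's note] The cmp_versions / "lt 1.15 2" trace repeated before each test gates which lcov --rc option names get exported (the branch/function coverage rc options differ across lcov releases). A standalone sketch of that component-wise comparison, simplified from scripts/common.sh and assuming purely numeric fields:

    #!/usr/bin/env bash
    # Return 0 if version $1 < version $2, comparing dot/dash-separated fields.
    version_lt() {
        local -a v1 v2
        IFS='.-' read -ra v1 <<< "$1"
        IFS='.-' read -ra v2 <<< "$2"
        local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for ((i = 0; i < n; i++)); do
            local a=${v1[i]:-0} b=${v2[i]:-0}   # missing fields count as 0
            (( a > b )) && return 1
            (( a < b )) && return 0
        done
        return 1                                # equal is not "less than"
    }
    version_lt 1.15 2 && echo "lcov < 2: use --rc lcov_branch_coverage=1"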
00:05:36.378 [2024-12-16 10:37:36.229054] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70144 ] 00:05:36.378 [2024-12-16 10:37:36.363041] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.635 [2024-12-16 10:37:36.390846] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.201 10:37:37 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:37.201 10:37:37 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:37.201 10:37:37 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:37.461 10:37:37 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 70144 00:05:37.461 10:37:37 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 70144 ']' 00:05:37.461 10:37:37 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 70144 00:05:37.461 10:37:37 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:05:37.461 10:37:37 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:37.461 10:37:37 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70144 00:05:37.461 killing process with pid 70144 00:05:37.461 10:37:37 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:37.461 10:37:37 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:37.461 10:37:37 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70144' 00:05:37.461 10:37:37 alias_rpc -- common/autotest_common.sh@969 -- # kill 70144 00:05:37.461 10:37:37 alias_rpc -- common/autotest_common.sh@974 -- # wait 70144 00:05:37.721 ************************************ 00:05:37.722 END TEST alias_rpc 00:05:37.722 ************************************ 00:05:37.722 00:05:37.722 real 0m1.530s 00:05:37.722 user 0m1.707s 00:05:37.722 sys 0m0.320s 00:05:37.722 10:37:37 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:37.722 10:37:37 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:37.722 10:37:37 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:37.722 10:37:37 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:37.722 10:37:37 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:37.722 10:37:37 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:37.722 10:37:37 -- common/autotest_common.sh@10 -- # set +x 00:05:37.722 ************************************ 00:05:37.722 START TEST spdkcli_tcp 00:05:37.722 ************************************ 00:05:37.722 10:37:37 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:37.722 * Looking for test storage... 
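[editor's note] The killprocess helper traced above checks the pid is alive, inspects the command name (reactor_0 for an SPDK reactor thread) and refuses to kill a sudo wrapper, then kills and reaps the process. A compact sketch of the same safeguard:

    #!/usr/bin/env bash
    killprocess_sketch() {
        local pid=$1
        [ -z "$pid" ] && return 1
        kill -0 "$pid" 2>/dev/null || return 0      # already gone
        local name
        name=$(ps --no-headers -o comm= "$pid")     # e.g. reactor_0 for spdk_tgt
        [ "$name" = sudo ] && return 1              # never kill the sudo wrapper
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null                     # reaps only our own children
    }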
00:05:37.722 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:05:37.722 10:37:37 spdkcli_tcp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:37.722 10:37:37 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lcov --version 00:05:37.722 10:37:37 spdkcli_tcp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:37.985 10:37:37 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:37.985 10:37:37 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:37.985 10:37:37 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:37.985 10:37:37 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:37.985 10:37:37 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:37.985 10:37:37 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:37.985 10:37:37 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:37.985 10:37:37 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:37.985 10:37:37 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:37.985 10:37:37 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:37.985 10:37:37 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:37.985 10:37:37 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:37.985 10:37:37 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:37.985 10:37:37 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:37.985 10:37:37 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:37.985 10:37:37 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:37.985 10:37:37 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:37.985 10:37:37 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:37.985 10:37:37 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:37.985 10:37:37 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:37.985 10:37:37 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:37.985 10:37:37 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:37.985 10:37:37 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:37.985 10:37:37 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:37.985 10:37:37 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:37.985 10:37:37 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:37.985 10:37:37 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:37.985 10:37:37 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:37.985 10:37:37 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:37.985 10:37:37 spdkcli_tcp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:37.985 10:37:37 spdkcli_tcp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:37.985 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.985 --rc genhtml_branch_coverage=1 00:05:37.985 --rc genhtml_function_coverage=1 00:05:37.985 --rc genhtml_legend=1 00:05:37.985 --rc geninfo_all_blocks=1 00:05:37.985 --rc geninfo_unexecuted_blocks=1 00:05:37.985 00:05:37.985 ' 00:05:37.985 10:37:37 spdkcli_tcp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:37.985 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.985 --rc genhtml_branch_coverage=1 00:05:37.985 --rc genhtml_function_coverage=1 00:05:37.985 --rc genhtml_legend=1 00:05:37.985 --rc geninfo_all_blocks=1 00:05:37.985 --rc geninfo_unexecuted_blocks=1 00:05:37.985 
00:05:37.985 ' 00:05:37.985 10:37:37 spdkcli_tcp -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:37.985 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.985 --rc genhtml_branch_coverage=1 00:05:37.985 --rc genhtml_function_coverage=1 00:05:37.985 --rc genhtml_legend=1 00:05:37.985 --rc geninfo_all_blocks=1 00:05:37.985 --rc geninfo_unexecuted_blocks=1 00:05:37.985 00:05:37.985 ' 00:05:37.985 10:37:37 spdkcli_tcp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:37.985 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.985 --rc genhtml_branch_coverage=1 00:05:37.985 --rc genhtml_function_coverage=1 00:05:37.985 --rc genhtml_legend=1 00:05:37.985 --rc geninfo_all_blocks=1 00:05:37.985 --rc geninfo_unexecuted_blocks=1 00:05:37.985 00:05:37.985 ' 00:05:37.985 10:37:37 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:05:37.985 10:37:37 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:05:37.985 10:37:37 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:05:37.985 10:37:37 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:37.985 10:37:37 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:37.985 10:37:37 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:37.985 10:37:37 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:37.985 10:37:37 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:37.985 10:37:37 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:37.985 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:37.985 10:37:37 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=70223 00:05:37.985 10:37:37 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 70223 00:05:37.985 10:37:37 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:37.985 10:37:37 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 70223 ']' 00:05:37.985 10:37:37 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:37.985 10:37:37 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:37.985 10:37:37 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:37.985 10:37:37 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:37.985 10:37:37 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:37.985 [2024-12-16 10:37:37.792459] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
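[editor's note] Here spdk_tgt starts with core mask 0x3 (two reactors), and the entries that follow bridge its UNIX-domain RPC socket to TCP with socat so rpc.py can reach it at 127.0.0.1:9998. A minimal reproduction using the same commands the trace shows:

    #!/usr/bin/env bash
    # Forward TCP port 9998 to the target's UNIX RPC socket (runs until killed).
    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
    socat_pid=$!
    # Query over TCP: -r retries (covers the race while socat starts up),
    # -t timeout, -s address, -p port.
    scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods
    kill "$socat_pid"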
00:05:37.985 [2024-12-16 10:37:37.792555] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70223 ] 00:05:37.985 [2024-12-16 10:37:37.925667] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:37.985 [2024-12-16 10:37:37.964121] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:37.985 [2024-12-16 10:37:37.964171] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.929 10:37:38 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:38.929 10:37:38 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:05:38.929 10:37:38 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:38.929 10:37:38 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=70240 00:05:38.929 10:37:38 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:38.929 [ 00:05:38.929 "bdev_malloc_delete", 00:05:38.929 "bdev_malloc_create", 00:05:38.929 "bdev_null_resize", 00:05:38.929 "bdev_null_delete", 00:05:38.929 "bdev_null_create", 00:05:38.929 "bdev_nvme_cuse_unregister", 00:05:38.929 "bdev_nvme_cuse_register", 00:05:38.929 "bdev_opal_new_user", 00:05:38.929 "bdev_opal_set_lock_state", 00:05:38.929 "bdev_opal_delete", 00:05:38.929 "bdev_opal_get_info", 00:05:38.929 "bdev_opal_create", 00:05:38.929 "bdev_nvme_opal_revert", 00:05:38.929 "bdev_nvme_opal_init", 00:05:38.929 "bdev_nvme_send_cmd", 00:05:38.929 "bdev_nvme_set_keys", 00:05:38.929 "bdev_nvme_get_path_iostat", 00:05:38.929 "bdev_nvme_get_mdns_discovery_info", 00:05:38.929 "bdev_nvme_stop_mdns_discovery", 00:05:38.929 "bdev_nvme_start_mdns_discovery", 00:05:38.929 "bdev_nvme_set_multipath_policy", 00:05:38.929 "bdev_nvme_set_preferred_path", 00:05:38.929 "bdev_nvme_get_io_paths", 00:05:38.929 "bdev_nvme_remove_error_injection", 00:05:38.929 "bdev_nvme_add_error_injection", 00:05:38.929 "bdev_nvme_get_discovery_info", 00:05:38.929 "bdev_nvme_stop_discovery", 00:05:38.929 "bdev_nvme_start_discovery", 00:05:38.929 "bdev_nvme_get_controller_health_info", 00:05:38.929 "bdev_nvme_disable_controller", 00:05:38.930 "bdev_nvme_enable_controller", 00:05:38.930 "bdev_nvme_reset_controller", 00:05:38.930 "bdev_nvme_get_transport_statistics", 00:05:38.930 "bdev_nvme_apply_firmware", 00:05:38.930 "bdev_nvme_detach_controller", 00:05:38.930 "bdev_nvme_get_controllers", 00:05:38.930 "bdev_nvme_attach_controller", 00:05:38.930 "bdev_nvme_set_hotplug", 00:05:38.930 "bdev_nvme_set_options", 00:05:38.930 "bdev_passthru_delete", 00:05:38.930 "bdev_passthru_create", 00:05:38.930 "bdev_lvol_set_parent_bdev", 00:05:38.930 "bdev_lvol_set_parent", 00:05:38.930 "bdev_lvol_check_shallow_copy", 00:05:38.930 "bdev_lvol_start_shallow_copy", 00:05:38.930 "bdev_lvol_grow_lvstore", 00:05:38.930 "bdev_lvol_get_lvols", 00:05:38.930 "bdev_lvol_get_lvstores", 00:05:38.930 "bdev_lvol_delete", 00:05:38.930 "bdev_lvol_set_read_only", 00:05:38.930 "bdev_lvol_resize", 00:05:38.930 "bdev_lvol_decouple_parent", 00:05:38.930 "bdev_lvol_inflate", 00:05:38.930 "bdev_lvol_rename", 00:05:38.930 "bdev_lvol_clone_bdev", 00:05:38.930 "bdev_lvol_clone", 00:05:38.930 "bdev_lvol_snapshot", 00:05:38.930 "bdev_lvol_create", 00:05:38.930 "bdev_lvol_delete_lvstore", 00:05:38.930 "bdev_lvol_rename_lvstore", 00:05:38.930 
"bdev_lvol_create_lvstore", 00:05:38.930 "bdev_raid_set_options", 00:05:38.930 "bdev_raid_remove_base_bdev", 00:05:38.930 "bdev_raid_add_base_bdev", 00:05:38.930 "bdev_raid_delete", 00:05:38.930 "bdev_raid_create", 00:05:38.930 "bdev_raid_get_bdevs", 00:05:38.930 "bdev_error_inject_error", 00:05:38.930 "bdev_error_delete", 00:05:38.930 "bdev_error_create", 00:05:38.930 "bdev_split_delete", 00:05:38.930 "bdev_split_create", 00:05:38.930 "bdev_delay_delete", 00:05:38.930 "bdev_delay_create", 00:05:38.930 "bdev_delay_update_latency", 00:05:38.930 "bdev_zone_block_delete", 00:05:38.930 "bdev_zone_block_create", 00:05:38.930 "blobfs_create", 00:05:38.930 "blobfs_detect", 00:05:38.930 "blobfs_set_cache_size", 00:05:38.930 "bdev_xnvme_delete", 00:05:38.930 "bdev_xnvme_create", 00:05:38.930 "bdev_aio_delete", 00:05:38.930 "bdev_aio_rescan", 00:05:38.930 "bdev_aio_create", 00:05:38.930 "bdev_ftl_set_property", 00:05:38.930 "bdev_ftl_get_properties", 00:05:38.930 "bdev_ftl_get_stats", 00:05:38.930 "bdev_ftl_unmap", 00:05:38.930 "bdev_ftl_unload", 00:05:38.930 "bdev_ftl_delete", 00:05:38.930 "bdev_ftl_load", 00:05:38.930 "bdev_ftl_create", 00:05:38.930 "bdev_virtio_attach_controller", 00:05:38.930 "bdev_virtio_scsi_get_devices", 00:05:38.930 "bdev_virtio_detach_controller", 00:05:38.930 "bdev_virtio_blk_set_hotplug", 00:05:38.930 "bdev_iscsi_delete", 00:05:38.930 "bdev_iscsi_create", 00:05:38.930 "bdev_iscsi_set_options", 00:05:38.930 "accel_error_inject_error", 00:05:38.930 "ioat_scan_accel_module", 00:05:38.930 "dsa_scan_accel_module", 00:05:38.930 "iaa_scan_accel_module", 00:05:38.930 "keyring_file_remove_key", 00:05:38.930 "keyring_file_add_key", 00:05:38.930 "keyring_linux_set_options", 00:05:38.930 "fsdev_aio_delete", 00:05:38.930 "fsdev_aio_create", 00:05:38.930 "iscsi_get_histogram", 00:05:38.930 "iscsi_enable_histogram", 00:05:38.930 "iscsi_set_options", 00:05:38.930 "iscsi_get_auth_groups", 00:05:38.930 "iscsi_auth_group_remove_secret", 00:05:38.930 "iscsi_auth_group_add_secret", 00:05:38.930 "iscsi_delete_auth_group", 00:05:38.930 "iscsi_create_auth_group", 00:05:38.930 "iscsi_set_discovery_auth", 00:05:38.930 "iscsi_get_options", 00:05:38.930 "iscsi_target_node_request_logout", 00:05:38.930 "iscsi_target_node_set_redirect", 00:05:38.930 "iscsi_target_node_set_auth", 00:05:38.930 "iscsi_target_node_add_lun", 00:05:38.930 "iscsi_get_stats", 00:05:38.930 "iscsi_get_connections", 00:05:38.930 "iscsi_portal_group_set_auth", 00:05:38.930 "iscsi_start_portal_group", 00:05:38.930 "iscsi_delete_portal_group", 00:05:38.930 "iscsi_create_portal_group", 00:05:38.930 "iscsi_get_portal_groups", 00:05:38.930 "iscsi_delete_target_node", 00:05:38.930 "iscsi_target_node_remove_pg_ig_maps", 00:05:38.930 "iscsi_target_node_add_pg_ig_maps", 00:05:38.930 "iscsi_create_target_node", 00:05:38.930 "iscsi_get_target_nodes", 00:05:38.930 "iscsi_delete_initiator_group", 00:05:38.930 "iscsi_initiator_group_remove_initiators", 00:05:38.930 "iscsi_initiator_group_add_initiators", 00:05:38.930 "iscsi_create_initiator_group", 00:05:38.930 "iscsi_get_initiator_groups", 00:05:38.930 "nvmf_set_crdt", 00:05:38.930 "nvmf_set_config", 00:05:38.930 "nvmf_set_max_subsystems", 00:05:38.930 "nvmf_stop_mdns_prr", 00:05:38.930 "nvmf_publish_mdns_prr", 00:05:38.930 "nvmf_subsystem_get_listeners", 00:05:38.930 "nvmf_subsystem_get_qpairs", 00:05:38.930 "nvmf_subsystem_get_controllers", 00:05:38.930 "nvmf_get_stats", 00:05:38.930 "nvmf_get_transports", 00:05:38.930 "nvmf_create_transport", 00:05:38.930 "nvmf_get_targets", 00:05:38.930 
"nvmf_delete_target", 00:05:38.930 "nvmf_create_target", 00:05:38.930 "nvmf_subsystem_allow_any_host", 00:05:38.930 "nvmf_subsystem_set_keys", 00:05:38.930 "nvmf_subsystem_remove_host", 00:05:38.930 "nvmf_subsystem_add_host", 00:05:38.930 "nvmf_ns_remove_host", 00:05:38.930 "nvmf_ns_add_host", 00:05:38.930 "nvmf_subsystem_remove_ns", 00:05:38.930 "nvmf_subsystem_set_ns_ana_group", 00:05:38.930 "nvmf_subsystem_add_ns", 00:05:38.930 "nvmf_subsystem_listener_set_ana_state", 00:05:38.930 "nvmf_discovery_get_referrals", 00:05:38.930 "nvmf_discovery_remove_referral", 00:05:38.930 "nvmf_discovery_add_referral", 00:05:38.930 "nvmf_subsystem_remove_listener", 00:05:38.930 "nvmf_subsystem_add_listener", 00:05:38.930 "nvmf_delete_subsystem", 00:05:38.930 "nvmf_create_subsystem", 00:05:38.930 "nvmf_get_subsystems", 00:05:38.930 "env_dpdk_get_mem_stats", 00:05:38.930 "nbd_get_disks", 00:05:38.930 "nbd_stop_disk", 00:05:38.930 "nbd_start_disk", 00:05:38.930 "ublk_recover_disk", 00:05:38.930 "ublk_get_disks", 00:05:38.930 "ublk_stop_disk", 00:05:38.930 "ublk_start_disk", 00:05:38.930 "ublk_destroy_target", 00:05:38.930 "ublk_create_target", 00:05:38.930 "virtio_blk_create_transport", 00:05:38.930 "virtio_blk_get_transports", 00:05:38.930 "vhost_controller_set_coalescing", 00:05:38.930 "vhost_get_controllers", 00:05:38.930 "vhost_delete_controller", 00:05:38.930 "vhost_create_blk_controller", 00:05:38.930 "vhost_scsi_controller_remove_target", 00:05:38.930 "vhost_scsi_controller_add_target", 00:05:38.930 "vhost_start_scsi_controller", 00:05:38.930 "vhost_create_scsi_controller", 00:05:38.930 "thread_set_cpumask", 00:05:38.930 "scheduler_set_options", 00:05:38.930 "framework_get_governor", 00:05:38.930 "framework_get_scheduler", 00:05:38.930 "framework_set_scheduler", 00:05:38.930 "framework_get_reactors", 00:05:38.930 "thread_get_io_channels", 00:05:38.930 "thread_get_pollers", 00:05:38.930 "thread_get_stats", 00:05:38.930 "framework_monitor_context_switch", 00:05:38.930 "spdk_kill_instance", 00:05:38.930 "log_enable_timestamps", 00:05:38.930 "log_get_flags", 00:05:38.930 "log_clear_flag", 00:05:38.930 "log_set_flag", 00:05:38.930 "log_get_level", 00:05:38.930 "log_set_level", 00:05:38.930 "log_get_print_level", 00:05:38.930 "log_set_print_level", 00:05:38.930 "framework_enable_cpumask_locks", 00:05:38.930 "framework_disable_cpumask_locks", 00:05:38.930 "framework_wait_init", 00:05:38.930 "framework_start_init", 00:05:38.930 "scsi_get_devices", 00:05:38.930 "bdev_get_histogram", 00:05:38.930 "bdev_enable_histogram", 00:05:38.930 "bdev_set_qos_limit", 00:05:38.930 "bdev_set_qd_sampling_period", 00:05:38.930 "bdev_get_bdevs", 00:05:38.930 "bdev_reset_iostat", 00:05:38.930 "bdev_get_iostat", 00:05:38.930 "bdev_examine", 00:05:38.930 "bdev_wait_for_examine", 00:05:38.930 "bdev_set_options", 00:05:38.930 "accel_get_stats", 00:05:38.930 "accel_set_options", 00:05:38.930 "accel_set_driver", 00:05:38.930 "accel_crypto_key_destroy", 00:05:38.930 "accel_crypto_keys_get", 00:05:38.930 "accel_crypto_key_create", 00:05:38.930 "accel_assign_opc", 00:05:38.930 "accel_get_module_info", 00:05:38.930 "accel_get_opc_assignments", 00:05:38.930 "vmd_rescan", 00:05:38.930 "vmd_remove_device", 00:05:38.930 "vmd_enable", 00:05:38.930 "sock_get_default_impl", 00:05:38.930 "sock_set_default_impl", 00:05:38.930 "sock_impl_set_options", 00:05:38.930 "sock_impl_get_options", 00:05:38.930 "iobuf_get_stats", 00:05:38.930 "iobuf_set_options", 00:05:38.930 "keyring_get_keys", 00:05:38.930 "framework_get_pci_devices", 00:05:38.930 
"framework_get_config", 00:05:38.930 "framework_get_subsystems", 00:05:38.930 "fsdev_set_opts", 00:05:38.930 "fsdev_get_opts", 00:05:38.930 "trace_get_info", 00:05:38.930 "trace_get_tpoint_group_mask", 00:05:38.930 "trace_disable_tpoint_group", 00:05:38.930 "trace_enable_tpoint_group", 00:05:38.930 "trace_clear_tpoint_mask", 00:05:38.930 "trace_set_tpoint_mask", 00:05:38.930 "notify_get_notifications", 00:05:38.930 "notify_get_types", 00:05:38.930 "spdk_get_version", 00:05:38.930 "rpc_get_methods" 00:05:38.930 ] 00:05:38.930 10:37:38 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:38.930 10:37:38 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:38.930 10:37:38 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:38.930 10:37:38 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:38.930 10:37:38 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 70223 00:05:38.930 10:37:38 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 70223 ']' 00:05:38.930 10:37:38 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 70223 00:05:38.930 10:37:38 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:05:38.930 10:37:38 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:38.930 10:37:38 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70223 00:05:38.931 10:37:38 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:38.931 10:37:38 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:38.931 10:37:38 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70223' 00:05:38.931 killing process with pid 70223 00:05:38.931 10:37:38 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 70223 00:05:38.931 10:37:38 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 70223 00:05:39.190 00:05:39.190 real 0m1.552s 00:05:39.190 user 0m2.778s 00:05:39.190 sys 0m0.388s 00:05:39.190 ************************************ 00:05:39.190 END TEST spdkcli_tcp 00:05:39.190 ************************************ 00:05:39.190 10:37:39 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:39.190 10:37:39 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:39.450 10:37:39 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:39.450 10:37:39 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:39.450 10:37:39 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:39.450 10:37:39 -- common/autotest_common.sh@10 -- # set +x 00:05:39.450 ************************************ 00:05:39.450 START TEST dpdk_mem_utility 00:05:39.450 ************************************ 00:05:39.450 10:37:39 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:39.450 * Looking for test storage... 
00:05:39.450 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:39.450 10:37:39 dpdk_mem_utility -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:39.450 10:37:39 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:39.450 10:37:39 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lcov --version 00:05:39.450 10:37:39 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:39.450 10:37:39 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:39.450 10:37:39 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:39.450 10:37:39 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:39.450 10:37:39 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:39.450 10:37:39 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:39.450 10:37:39 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:39.450 10:37:39 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:39.450 10:37:39 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:39.450 10:37:39 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:39.450 10:37:39 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:39.450 10:37:39 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:39.450 10:37:39 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:39.450 10:37:39 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:39.450 10:37:39 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:39.450 10:37:39 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:39.450 10:37:39 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:39.450 10:37:39 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:39.450 10:37:39 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:39.450 10:37:39 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:39.450 10:37:39 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:39.450 10:37:39 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:39.450 10:37:39 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:39.450 10:37:39 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:39.450 10:37:39 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:39.450 10:37:39 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:39.450 10:37:39 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:39.450 10:37:39 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:39.450 10:37:39 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:39.450 10:37:39 dpdk_mem_utility -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:39.450 10:37:39 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:39.450 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.450 --rc genhtml_branch_coverage=1 00:05:39.450 --rc genhtml_function_coverage=1 00:05:39.450 --rc genhtml_legend=1 00:05:39.450 --rc geninfo_all_blocks=1 00:05:39.450 --rc geninfo_unexecuted_blocks=1 00:05:39.450 00:05:39.450 ' 00:05:39.450 10:37:39 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:39.450 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.450 --rc 
genhtml_branch_coverage=1 00:05:39.450 --rc genhtml_function_coverage=1 00:05:39.450 --rc genhtml_legend=1 00:05:39.450 --rc geninfo_all_blocks=1 00:05:39.450 --rc geninfo_unexecuted_blocks=1 00:05:39.450 00:05:39.450 ' 00:05:39.450 10:37:39 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:39.450 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.450 --rc genhtml_branch_coverage=1 00:05:39.450 --rc genhtml_function_coverage=1 00:05:39.450 --rc genhtml_legend=1 00:05:39.450 --rc geninfo_all_blocks=1 00:05:39.450 --rc geninfo_unexecuted_blocks=1 00:05:39.450 00:05:39.450 ' 00:05:39.450 10:37:39 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:39.450 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.450 --rc genhtml_branch_coverage=1 00:05:39.450 --rc genhtml_function_coverage=1 00:05:39.450 --rc genhtml_legend=1 00:05:39.450 --rc geninfo_all_blocks=1 00:05:39.450 --rc geninfo_unexecuted_blocks=1 00:05:39.450 00:05:39.450 ' 00:05:39.450 10:37:39 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:39.450 10:37:39 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=70323 00:05:39.450 10:37:39 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 70323 00:05:39.450 10:37:39 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 70323 ']' 00:05:39.450 10:37:39 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:39.450 10:37:39 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:39.450 10:37:39 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:39.450 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:39.450 10:37:39 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:39.450 10:37:39 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:39.450 10:37:39 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:39.450 [2024-12-16 10:37:39.427466] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
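The waitforlisten step traced above is the standard autotest idiom: launch the target, then poll its JSON-RPC socket until it answers before the test body runs. A rough sketch of that pattern, assuming scripts/rpc.py and the default /var/tmp/spdk.sock (the retry ceiling mirrors the max_retries=100 printed in the trace; the real helper lives in autotest_common.sh):

  # Sketch only: launch spdk_tgt and wait for its RPC socket to answer.
  spdk_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
  "$spdk_bin" &
  spdkpid=$!
  trap 'kill "$spdkpid"' SIGINT SIGTERM EXIT
  for ((i = 0; i < 100; i++)); do   # 100 matches max_retries above
      /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 1 -s /var/tmp/spdk.sock rpc_get_methods \
          &>/dev/null && break      # socket is up once any RPC succeeds
      sleep 0.5
  done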
00:05:39.450 [2024-12-16 10:37:39.427704] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70323 ] 00:05:39.711 [2024-12-16 10:37:39.562560] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:39.711 [2024-12-16 10:37:39.595696] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.654 10:37:40 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:40.654 10:37:40 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:05:40.654 10:37:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:40.654 10:37:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:40.654 10:37:40 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:40.654 10:37:40 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:40.654 { 00:05:40.654 "filename": "/tmp/spdk_mem_dump.txt" 00:05:40.654 } 00:05:40.654 10:37:40 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:40.654 10:37:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:40.654 DPDK memory size 860.000000 MiB in 1 heap(s) 00:05:40.654 1 heaps totaling size 860.000000 MiB 00:05:40.654 size: 860.000000 MiB heap id: 0 00:05:40.654 end heaps---------- 00:05:40.654 9 mempools totaling size 642.649841 MiB 00:05:40.654 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:40.654 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:40.654 size: 92.545471 MiB name: bdev_io_70323 00:05:40.654 size: 51.011292 MiB name: evtpool_70323 00:05:40.654 size: 50.003479 MiB name: msgpool_70323 00:05:40.654 size: 36.509338 MiB name: fsdev_io_70323 00:05:40.654 size: 21.763794 MiB name: PDU_Pool 00:05:40.654 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:40.654 size: 0.026123 MiB name: Session_Pool 00:05:40.654 end mempools------- 00:05:40.654 6 memzones totaling size 4.142822 MiB 00:05:40.654 size: 1.000366 MiB name: RG_ring_0_70323 00:05:40.654 size: 1.000366 MiB name: RG_ring_1_70323 00:05:40.654 size: 1.000366 MiB name: RG_ring_4_70323 00:05:40.654 size: 1.000366 MiB name: RG_ring_5_70323 00:05:40.654 size: 0.125366 MiB name: RG_ring_2_70323 00:05:40.654 size: 0.015991 MiB name: RG_ring_3_70323 00:05:40.654 end memzones------- 00:05:40.654 10:37:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:40.654 heap id: 0 total size: 860.000000 MiB number of busy elements: 316 number of free elements: 16 00:05:40.654 list of free elements. 
size: 13.934875 MiB 00:05:40.654 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:40.654 element at address: 0x200000800000 with size: 1.996948 MiB 00:05:40.654 element at address: 0x20001bc00000 with size: 0.999878 MiB 00:05:40.654 element at address: 0x20001be00000 with size: 0.999878 MiB 00:05:40.654 element at address: 0x200034a00000 with size: 0.994446 MiB 00:05:40.654 element at address: 0x200009600000 with size: 0.959839 MiB 00:05:40.654 element at address: 0x200015e00000 with size: 0.954285 MiB 00:05:40.654 element at address: 0x20001c000000 with size: 0.936584 MiB 00:05:40.654 element at address: 0x200000200000 with size: 0.835022 MiB 00:05:40.654 element at address: 0x20001d800000 with size: 0.566956 MiB 00:05:40.654 element at address: 0x20000d800000 with size: 0.489258 MiB 00:05:40.654 element at address: 0x200003e00000 with size: 0.487732 MiB 00:05:40.654 element at address: 0x20001c200000 with size: 0.485657 MiB 00:05:40.654 element at address: 0x200007000000 with size: 0.480286 MiB 00:05:40.654 element at address: 0x20002ac00000 with size: 0.395752 MiB 00:05:40.654 element at address: 0x200003a00000 with size: 0.352844 MiB 00:05:40.654 list of standard malloc elements. size: 199.268433 MiB 00:05:40.654 element at address: 0x20000d9fff80 with size: 132.000122 MiB 00:05:40.654 element at address: 0x2000097fff80 with size: 64.000122 MiB 00:05:40.654 element at address: 0x20001bcfff80 with size: 1.000122 MiB 00:05:40.654 element at address: 0x20001befff80 with size: 1.000122 MiB 00:05:40.654 element at address: 0x20001c0fff80 with size: 1.000122 MiB 00:05:40.654 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:40.654 element at address: 0x20001c0eff00 with size: 0.062622 MiB 00:05:40.654 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:40.654 element at address: 0x20001c0efdc0 with size: 0.000305 MiB 00:05:40.654 element at address: 0x2000002d5c40 with size: 0.000183 MiB 00:05:40.654 element at address: 0x2000002d5d00 with size: 0.000183 MiB 00:05:40.654 element at address: 0x2000002d5dc0 with size: 0.000183 MiB 00:05:40.654 element at address: 0x2000002d5e80 with size: 0.000183 MiB 00:05:40.654 element at address: 0x2000002d5f40 with size: 0.000183 MiB 00:05:40.654 element at address: 0x2000002d6000 with size: 0.000183 MiB 00:05:40.654 element at address: 0x2000002d60c0 with size: 0.000183 MiB 00:05:40.654 element at address: 0x2000002d6180 with size: 0.000183 MiB 00:05:40.654 element at address: 0x2000002d6240 with size: 0.000183 MiB 00:05:40.654 element at address: 0x2000002d6300 with size: 0.000183 MiB 00:05:40.654 element at address: 0x2000002d63c0 with size: 0.000183 MiB 00:05:40.654 element at address: 0x2000002d6480 with size: 0.000183 MiB 00:05:40.654 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:05:40.654 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:05:40.654 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:05:40.654 element at address: 0x2000002d68c0 with size: 0.000183 MiB 00:05:40.654 element at address: 0x2000002d6980 with size: 0.000183 MiB 00:05:40.654 element at address: 0x2000002d6a40 with size: 0.000183 MiB 00:05:40.654 element at address: 0x2000002d6b00 with size: 0.000183 MiB 00:05:40.654 element at address: 0x2000002d6bc0 with size: 0.000183 MiB 00:05:40.654 element at address: 0x2000002d6c80 with size: 0.000183 MiB 00:05:40.654 element at address: 0x2000002d6d40 with size: 0.000183 MiB 00:05:40.654 element at address: 0x2000002d6e00 with size: 0.000183 MiB 
00:05:40.654 element at address: 0x2000002d6ec0 with size: 0.000183 MiB 00:05:40.654 element at address: 0x2000002d6f80 with size: 0.000183 MiB 00:05:40.654 element at address: 0x2000002d7040 with size: 0.000183 MiB 00:05:40.654 element at address: 0x2000002d7100 with size: 0.000183 MiB 00:05:40.654 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:05:40.654 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:05:40.654 element at address: 0x2000002d7340 with size: 0.000183 MiB 00:05:40.654 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:05:40.654 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:05:40.654 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:05:40.654 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:05:40.654 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:05:40.654 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:05:40.654 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:05:40.654 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:05:40.654 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:05:40.654 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:40.654 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:40.654 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:40.654 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:40.655 element at address: 0x200003a5a540 with size: 0.000183 MiB 00:05:40.655 element at address: 0x200003a5a740 with size: 0.000183 MiB 00:05:40.655 element at address: 0x200003a5ea00 with size: 0.000183 MiB 00:05:40.655 element at address: 0x200003a7ecc0 with size: 0.000183 MiB 00:05:40.655 element at address: 0x200003a7ed80 with size: 0.000183 MiB 00:05:40.655 element at address: 0x200003a7ee40 with size: 0.000183 MiB 00:05:40.655 element at address: 0x200003a7ef00 with size: 0.000183 MiB 00:05:40.655 element at address: 0x200003a7efc0 with size: 0.000183 MiB 00:05:40.655 element at address: 0x200003a7f080 with size: 0.000183 MiB 00:05:40.655 element at address: 0x200003a7f140 with size: 0.000183 MiB 00:05:40.655 element at address: 0x200003a7f200 with size: 0.000183 MiB 00:05:40.655 element at address: 0x200003a7f2c0 with size: 0.000183 MiB 00:05:40.655 element at address: 0x200003a7f380 with size: 0.000183 MiB 00:05:40.655 element at address: 0x200003a7f440 with size: 0.000183 MiB 00:05:40.655 element at address: 0x200003a7f500 with size: 0.000183 MiB 00:05:40.655 element at address: 0x200003a7f5c0 with size: 0.000183 MiB 00:05:40.655 element at address: 0x200003aff880 with size: 0.000183 MiB 00:05:40.655 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:40.655 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:40.655 element at address: 0x200003e7cdc0 with size: 0.000183 MiB 00:05:40.655 element at address: 0x200003e7ce80 with size: 0.000183 MiB 00:05:40.655 element at address: 0x200003e7cf40 with size: 0.000183 MiB 00:05:40.655 element at address: 0x200003e7d000 with size: 0.000183 MiB 00:05:40.655 element at address: 0x200003e7d0c0 with size: 0.000183 MiB 00:05:40.655 element at address: 0x200003e7d180 with size: 0.000183 MiB 00:05:40.655 element at address: 0x200003e7d240 with size: 0.000183 MiB 00:05:40.655 element at address: 0x200003e7d300 with size: 0.000183 MiB 00:05:40.655 element at address: 0x200003e7d3c0 with size: 0.000183 MiB 00:05:40.655 element at address: 0x200003e7d480 with size: 0.000183 MiB 00:05:40.655 element at 
address: 0x200003e7d540 with size: 0.000183 MiB 00:05:40.655 element at address: 0x200003e7d600 with size: 0.000183 MiB 00:05:40.655 element at address: 0x200003e7d6c0 with size: 0.000183 MiB 00:05:40.655 element at address: 0x200003e7d780 with size: 0.000183 MiB 00:05:40.655 element at address: 0x200003e7d840 with size: 0.000183 MiB 00:05:40.655 element at address: 0x200003e7d900 with size: 0.000183 MiB 00:05:40.655 element at address: 0x200003e7d9c0 with size: 0.000183 MiB 00:05:40.655 element at address: 0x200003e7da80 with size: 0.000183 MiB 00:05:40.655 element at address: 0x200003e7db40 with size: 0.000183 MiB 00:05:40.655 element at address: 0x200003e7dc00 with size: 0.000183 MiB 00:05:40.655 element at address: 0x200003e7dcc0 with size: 0.000183 MiB 00:05:40.655 element at address: 0x200003e7dd80 with size: 0.000183 MiB 00:05:40.655 element at address: 0x200003e7de40 with size: 0.000183 MiB 00:05:40.655 element at address: 0x200003e7df00 with size: 0.000183 MiB 00:05:40.655 element at address: 0x200003e7dfc0 with size: 0.000183 MiB 00:05:40.655 element at address: 0x200003e7e080 with size: 0.000183 MiB 00:05:40.655 element at address: 0x200003e7e140 with size: 0.000183 MiB 00:05:40.655 element at address: 0x200003e7e200 with size: 0.000183 MiB 00:05:40.655 element at address: 0x200003e7e2c0 with size: 0.000183 MiB 00:05:40.655 element at address: 0x200003e7e380 with size: 0.000183 MiB 00:05:40.655 element at address: 0x200003e7e440 with size: 0.000183 MiB 00:05:40.655 element at address: 0x200003e7e500 with size: 0.000183 MiB 00:05:40.655 element at address: 0x200003e7e5c0 with size: 0.000183 MiB 00:05:40.655 element at address: 0x200003e7e680 with size: 0.000183 MiB 00:05:40.655 element at address: 0x200003e7e740 with size: 0.000183 MiB 00:05:40.655 element at address: 0x200003e7e800 with size: 0.000183 MiB 00:05:40.655 element at address: 0x200003e7e8c0 with size: 0.000183 MiB 00:05:40.655 element at address: 0x200003e7e980 with size: 0.000183 MiB 00:05:40.655 element at address: 0x200003e7ea40 with size: 0.000183 MiB 00:05:40.655 element at address: 0x200003e7eb00 with size: 0.000183 MiB 00:05:40.655 element at address: 0x200003e7ebc0 with size: 0.000183 MiB 00:05:40.655 element at address: 0x200003e7ec80 with size: 0.000183 MiB 00:05:40.655 element at address: 0x200003e7ed40 with size: 0.000183 MiB 00:05:40.655 element at address: 0x200003e7ee00 with size: 0.000183 MiB 00:05:40.655 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:40.655 element at address: 0x20000707af40 with size: 0.000183 MiB 00:05:40.655 element at address: 0x20000707b000 with size: 0.000183 MiB 00:05:40.655 element at address: 0x20000707b0c0 with size: 0.000183 MiB 00:05:40.655 element at address: 0x20000707b180 with size: 0.000183 MiB 00:05:40.655 element at address: 0x20000707b240 with size: 0.000183 MiB 00:05:40.655 element at address: 0x20000707b300 with size: 0.000183 MiB 00:05:40.655 element at address: 0x20000707b3c0 with size: 0.000183 MiB 00:05:40.655 element at address: 0x20000707b480 with size: 0.000183 MiB 00:05:40.655 element at address: 0x20000707b540 with size: 0.000183 MiB 00:05:40.655 element at address: 0x20000707b600 with size: 0.000183 MiB 00:05:40.655 element at address: 0x20000707b6c0 with size: 0.000183 MiB 00:05:40.655 element at address: 0x2000070fb980 with size: 0.000183 MiB 00:05:40.655 element at address: 0x2000096fdd80 with size: 0.000183 MiB 00:05:40.655 element at address: 0x20000d87d400 with size: 0.000183 MiB 00:05:40.655 element at address: 0x20000d87d4c0 
with size: 0.000183 MiB 00:05:40.655 element at address: 0x20000d87d580 with size: 0.000183 MiB 00:05:40.655 element at address: 0x20000d87d640 with size: 0.000183 MiB 00:05:40.655 element at address: 0x20000d87d700 with size: 0.000183 MiB 00:05:40.655 element at address: 0x20000d87d7c0 with size: 0.000183 MiB 00:05:40.655 element at address: 0x20000d87d880 with size: 0.000183 MiB 00:05:40.655 element at address: 0x20000d87d940 with size: 0.000183 MiB 00:05:40.655 element at address: 0x20000d87da00 with size: 0.000183 MiB 00:05:40.655 element at address: 0x20000d87dac0 with size: 0.000183 MiB 00:05:40.655 element at address: 0x20000d8fdd80 with size: 0.000183 MiB 00:05:40.655 element at address: 0x200015ef44c0 with size: 0.000183 MiB 00:05:40.655 element at address: 0x20001c0efc40 with size: 0.000183 MiB 00:05:40.655 element at address: 0x20001c0efd00 with size: 0.000183 MiB 00:05:40.655 element at address: 0x20001c2bc740 with size: 0.000183 MiB 00:05:40.655 element at address: 0x20001d891240 with size: 0.000183 MiB 00:05:40.655 element at address: 0x20001d891300 with size: 0.000183 MiB 00:05:40.655 element at address: 0x20001d8913c0 with size: 0.000183 MiB 00:05:40.655 element at address: 0x20001d891480 with size: 0.000183 MiB 00:05:40.655 element at address: 0x20001d891540 with size: 0.000183 MiB 00:05:40.655 element at address: 0x20001d891600 with size: 0.000183 MiB 00:05:40.655 element at address: 0x20001d8916c0 with size: 0.000183 MiB 00:05:40.655 element at address: 0x20001d891780 with size: 0.000183 MiB 00:05:40.655 element at address: 0x20001d891840 with size: 0.000183 MiB 00:05:40.655 element at address: 0x20001d891900 with size: 0.000183 MiB 00:05:40.655 element at address: 0x20001d8919c0 with size: 0.000183 MiB 00:05:40.655 element at address: 0x20001d891a80 with size: 0.000183 MiB 00:05:40.655 element at address: 0x20001d891b40 with size: 0.000183 MiB 00:05:40.655 element at address: 0x20001d891c00 with size: 0.000183 MiB 00:05:40.655 element at address: 0x20001d891cc0 with size: 0.000183 MiB 00:05:40.655 element at address: 0x20001d891d80 with size: 0.000183 MiB 00:05:40.655 element at address: 0x20001d891e40 with size: 0.000183 MiB 00:05:40.655 element at address: 0x20001d891f00 with size: 0.000183 MiB 00:05:40.655 element at address: 0x20001d891fc0 with size: 0.000183 MiB 00:05:40.655 element at address: 0x20001d892080 with size: 0.000183 MiB 00:05:40.655 element at address: 0x20001d892140 with size: 0.000183 MiB 00:05:40.655 element at address: 0x20001d892200 with size: 0.000183 MiB 00:05:40.655 element at address: 0x20001d8922c0 with size: 0.000183 MiB 00:05:40.655 element at address: 0x20001d892380 with size: 0.000183 MiB 00:05:40.655 element at address: 0x20001d892440 with size: 0.000183 MiB 00:05:40.655 element at address: 0x20001d892500 with size: 0.000183 MiB 00:05:40.655 element at address: 0x20001d8925c0 with size: 0.000183 MiB 00:05:40.655 element at address: 0x20001d892680 with size: 0.000183 MiB 00:05:40.655 element at address: 0x20001d892740 with size: 0.000183 MiB 00:05:40.655 element at address: 0x20001d892800 with size: 0.000183 MiB 00:05:40.655 element at address: 0x20001d8928c0 with size: 0.000183 MiB 00:05:40.655 element at address: 0x20001d892980 with size: 0.000183 MiB 00:05:40.655 element at address: 0x20001d892a40 with size: 0.000183 MiB 00:05:40.655 element at address: 0x20001d892b00 with size: 0.000183 MiB 00:05:40.655 element at address: 0x20001d892bc0 with size: 0.000183 MiB 00:05:40.655 element at address: 0x20001d892c80 with size: 0.000183 MiB 
00:05:40.655 element at address: 0x20001d892d40 with size: 0.000183 MiB 00:05:40.655 element at address: 0x20001d892e00 with size: 0.000183 MiB 00:05:40.655 element at address: 0x20001d892ec0 with size: 0.000183 MiB 00:05:40.655 element at address: 0x20001d892f80 with size: 0.000183 MiB 00:05:40.655 element at address: 0x20001d893040 with size: 0.000183 MiB 00:05:40.655 element at address: 0x20001d893100 with size: 0.000183 MiB 00:05:40.655 element at address: 0x20001d8931c0 with size: 0.000183 MiB 00:05:40.655 element at address: 0x20001d893280 with size: 0.000183 MiB 00:05:40.655 element at address: 0x20001d893340 with size: 0.000183 MiB 00:05:40.655 element at address: 0x20001d893400 with size: 0.000183 MiB 00:05:40.655 element at address: 0x20001d8934c0 with size: 0.000183 MiB 00:05:40.655 element at address: 0x20001d893580 with size: 0.000183 MiB 00:05:40.655 element at address: 0x20001d893640 with size: 0.000183 MiB 00:05:40.655 element at address: 0x20001d893700 with size: 0.000183 MiB 00:05:40.655 element at address: 0x20001d8937c0 with size: 0.000183 MiB 00:05:40.655 element at address: 0x20001d893880 with size: 0.000183 MiB 00:05:40.655 element at address: 0x20001d893940 with size: 0.000183 MiB 00:05:40.655 element at address: 0x20001d893a00 with size: 0.000183 MiB 00:05:40.655 element at address: 0x20001d893ac0 with size: 0.000183 MiB 00:05:40.655 element at address: 0x20001d893b80 with size: 0.000183 MiB 00:05:40.655 element at address: 0x20001d893c40 with size: 0.000183 MiB 00:05:40.655 element at address: 0x20001d893d00 with size: 0.000183 MiB 00:05:40.655 element at address: 0x20001d893dc0 with size: 0.000183 MiB 00:05:40.655 element at address: 0x20001d893e80 with size: 0.000183 MiB 00:05:40.655 element at address: 0x20001d893f40 with size: 0.000183 MiB 00:05:40.655 element at address: 0x20001d894000 with size: 0.000183 MiB 00:05:40.655 element at address: 0x20001d8940c0 with size: 0.000183 MiB 00:05:40.655 element at address: 0x20001d894180 with size: 0.000183 MiB 00:05:40.655 element at address: 0x20001d894240 with size: 0.000183 MiB 00:05:40.655 element at address: 0x20001d894300 with size: 0.000183 MiB 00:05:40.656 element at address: 0x20001d8943c0 with size: 0.000183 MiB 00:05:40.656 element at address: 0x20001d894480 with size: 0.000183 MiB 00:05:40.656 element at address: 0x20001d894540 with size: 0.000183 MiB 00:05:40.656 element at address: 0x20001d894600 with size: 0.000183 MiB 00:05:40.656 element at address: 0x20001d8946c0 with size: 0.000183 MiB 00:05:40.656 element at address: 0x20001d894780 with size: 0.000183 MiB 00:05:40.656 element at address: 0x20001d894840 with size: 0.000183 MiB 00:05:40.656 element at address: 0x20001d894900 with size: 0.000183 MiB 00:05:40.656 element at address: 0x20001d8949c0 with size: 0.000183 MiB 00:05:40.656 element at address: 0x20001d894a80 with size: 0.000183 MiB 00:05:40.656 element at address: 0x20001d894b40 with size: 0.000183 MiB 00:05:40.656 element at address: 0x20001d894c00 with size: 0.000183 MiB 00:05:40.656 element at address: 0x20001d894cc0 with size: 0.000183 MiB 00:05:40.656 element at address: 0x20001d894d80 with size: 0.000183 MiB 00:05:40.656 element at address: 0x20001d894e40 with size: 0.000183 MiB 00:05:40.656 element at address: 0x20001d894f00 with size: 0.000183 MiB 00:05:40.656 element at address: 0x20001d894fc0 with size: 0.000183 MiB 00:05:40.656 element at address: 0x20001d895080 with size: 0.000183 MiB 00:05:40.656 element at address: 0x20001d895140 with size: 0.000183 MiB 00:05:40.656 element at 
address: 0x20001d895200 with size: 0.000183 MiB 00:05:40.656 element at address: 0x20001d8952c0 with size: 0.000183 MiB 00:05:40.656 element at address: 0x20001d895380 with size: 0.000183 MiB 00:05:40.656 element at address: 0x20001d895440 with size: 0.000183 MiB 00:05:40.656 element at address: 0x20002ac65500 with size: 0.000183 MiB 00:05:40.656 element at address: 0x20002ac655c0 with size: 0.000183 MiB 00:05:40.656 element at address: 0x20002ac6c1c0 with size: 0.000183 MiB 00:05:40.656 element at address: 0x20002ac6c3c0 with size: 0.000183 MiB 00:05:40.656 element at address: 0x20002ac6c480 with size: 0.000183 MiB 00:05:40.656 element at address: 0x20002ac6c540 with size: 0.000183 MiB 00:05:40.656 element at address: 0x20002ac6c600 with size: 0.000183 MiB 00:05:40.656 element at address: 0x20002ac6c6c0 with size: 0.000183 MiB 00:05:40.656 element at address: 0x20002ac6c780 with size: 0.000183 MiB 00:05:40.656 element at address: 0x20002ac6c840 with size: 0.000183 MiB 00:05:40.656 element at address: 0x20002ac6c900 with size: 0.000183 MiB 00:05:40.656 element at address: 0x20002ac6c9c0 with size: 0.000183 MiB 00:05:40.656 element at address: 0x20002ac6ca80 with size: 0.000183 MiB 00:05:40.656 element at address: 0x20002ac6cb40 with size: 0.000183 MiB 00:05:40.656 element at address: 0x20002ac6cc00 with size: 0.000183 MiB 00:05:40.656 element at address: 0x20002ac6ccc0 with size: 0.000183 MiB 00:05:40.656 element at address: 0x20002ac6cd80 with size: 0.000183 MiB 00:05:40.656 element at address: 0x20002ac6ce40 with size: 0.000183 MiB 00:05:40.656 element at address: 0x20002ac6cf00 with size: 0.000183 MiB 00:05:40.656 element at address: 0x20002ac6cfc0 with size: 0.000183 MiB 00:05:40.656 element at address: 0x20002ac6d080 with size: 0.000183 MiB 00:05:40.656 element at address: 0x20002ac6d140 with size: 0.000183 MiB 00:05:40.656 element at address: 0x20002ac6d200 with size: 0.000183 MiB 00:05:40.656 element at address: 0x20002ac6d2c0 with size: 0.000183 MiB 00:05:40.656 element at address: 0x20002ac6d380 with size: 0.000183 MiB 00:05:40.656 element at address: 0x20002ac6d440 with size: 0.000183 MiB 00:05:40.656 element at address: 0x20002ac6d500 with size: 0.000183 MiB 00:05:40.656 element at address: 0x20002ac6d5c0 with size: 0.000183 MiB 00:05:40.656 element at address: 0x20002ac6d680 with size: 0.000183 MiB 00:05:40.656 element at address: 0x20002ac6d740 with size: 0.000183 MiB 00:05:40.656 element at address: 0x20002ac6d800 with size: 0.000183 MiB 00:05:40.656 element at address: 0x20002ac6d8c0 with size: 0.000183 MiB 00:05:40.656 element at address: 0x20002ac6d980 with size: 0.000183 MiB 00:05:40.656 element at address: 0x20002ac6da40 with size: 0.000183 MiB 00:05:40.656 element at address: 0x20002ac6db00 with size: 0.000183 MiB 00:05:40.656 element at address: 0x20002ac6dbc0 with size: 0.000183 MiB 00:05:40.656 element at address: 0x20002ac6dc80 with size: 0.000183 MiB 00:05:40.656 element at address: 0x20002ac6dd40 with size: 0.000183 MiB 00:05:40.656 element at address: 0x20002ac6de00 with size: 0.000183 MiB 00:05:40.656 element at address: 0x20002ac6dec0 with size: 0.000183 MiB 00:05:40.656 element at address: 0x20002ac6df80 with size: 0.000183 MiB 00:05:40.656 element at address: 0x20002ac6e040 with size: 0.000183 MiB 00:05:40.656 element at address: 0x20002ac6e100 with size: 0.000183 MiB 00:05:40.656 element at address: 0x20002ac6e1c0 with size: 0.000183 MiB 00:05:40.656 element at address: 0x20002ac6e280 with size: 0.000183 MiB 00:05:40.656 element at address: 0x20002ac6e340 
with size: 0.000183 MiB 00:05:40.656 element at address: 0x20002ac6e400 with size: 0.000183 MiB 00:05:40.656 element at address: 0x20002ac6e4c0 with size: 0.000183 MiB 00:05:40.656 element at address: 0x20002ac6e580 with size: 0.000183 MiB 00:05:40.656 element at address: 0x20002ac6e640 with size: 0.000183 MiB 00:05:40.656 element at address: 0x20002ac6e700 with size: 0.000183 MiB 00:05:40.656 element at address: 0x20002ac6e7c0 with size: 0.000183 MiB 00:05:40.656 element at address: 0x20002ac6e880 with size: 0.000183 MiB 00:05:40.656 element at address: 0x20002ac6e940 with size: 0.000183 MiB 00:05:40.656 element at address: 0x20002ac6ea00 with size: 0.000183 MiB 00:05:40.656 element at address: 0x20002ac6eac0 with size: 0.000183 MiB 00:05:40.656 element at address: 0x20002ac6eb80 with size: 0.000183 MiB 00:05:40.656 element at address: 0x20002ac6ec40 with size: 0.000183 MiB 00:05:40.656 element at address: 0x20002ac6ed00 with size: 0.000183 MiB 00:05:40.656 element at address: 0x20002ac6edc0 with size: 0.000183 MiB 00:05:40.656 element at address: 0x20002ac6ee80 with size: 0.000183 MiB 00:05:40.656 element at address: 0x20002ac6ef40 with size: 0.000183 MiB 00:05:40.656 element at address: 0x20002ac6f000 with size: 0.000183 MiB 00:05:40.656 element at address: 0x20002ac6f0c0 with size: 0.000183 MiB 00:05:40.656 element at address: 0x20002ac6f180 with size: 0.000183 MiB 00:05:40.656 element at address: 0x20002ac6f240 with size: 0.000183 MiB 00:05:40.656 element at address: 0x20002ac6f300 with size: 0.000183 MiB 00:05:40.656 element at address: 0x20002ac6f3c0 with size: 0.000183 MiB 00:05:40.656 element at address: 0x20002ac6f480 with size: 0.000183 MiB 00:05:40.656 element at address: 0x20002ac6f540 with size: 0.000183 MiB 00:05:40.656 element at address: 0x20002ac6f600 with size: 0.000183 MiB 00:05:40.656 element at address: 0x20002ac6f6c0 with size: 0.000183 MiB 00:05:40.656 element at address: 0x20002ac6f780 with size: 0.000183 MiB 00:05:40.656 element at address: 0x20002ac6f840 with size: 0.000183 MiB 00:05:40.656 element at address: 0x20002ac6f900 with size: 0.000183 MiB 00:05:40.656 element at address: 0x20002ac6f9c0 with size: 0.000183 MiB 00:05:40.656 element at address: 0x20002ac6fa80 with size: 0.000183 MiB 00:05:40.656 element at address: 0x20002ac6fb40 with size: 0.000183 MiB 00:05:40.656 element at address: 0x20002ac6fc00 with size: 0.000183 MiB 00:05:40.656 element at address: 0x20002ac6fcc0 with size: 0.000183 MiB 00:05:40.656 element at address: 0x20002ac6fd80 with size: 0.000183 MiB 00:05:40.656 element at address: 0x20002ac6fe40 with size: 0.000183 MiB 00:05:40.656 element at address: 0x20002ac6ff00 with size: 0.000183 MiB 00:05:40.656 list of memzone associated elements. 
size: 646.796692 MiB 00:05:40.656 element at address: 0x20001d895500 with size: 211.416748 MiB 00:05:40.656 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:40.656 element at address: 0x20002ac6ffc0 with size: 157.562561 MiB 00:05:40.656 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:40.656 element at address: 0x200015ff4780 with size: 92.045044 MiB 00:05:40.656 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_70323_0 00:05:40.656 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:40.656 associated memzone info: size: 48.002930 MiB name: MP_evtpool_70323_0 00:05:40.656 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:40.656 associated memzone info: size: 48.002930 MiB name: MP_msgpool_70323_0 00:05:40.656 element at address: 0x2000071fdb80 with size: 36.008911 MiB 00:05:40.656 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_70323_0 00:05:40.656 element at address: 0x20001c3be940 with size: 20.255554 MiB 00:05:40.656 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:40.656 element at address: 0x200034bfeb40 with size: 18.005066 MiB 00:05:40.656 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:40.656 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:40.656 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_70323 00:05:40.656 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:40.656 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_70323 00:05:40.656 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:40.656 associated memzone info: size: 1.007996 MiB name: MP_evtpool_70323 00:05:40.656 element at address: 0x20000d8fde40 with size: 1.008118 MiB 00:05:40.656 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:40.656 element at address: 0x20001c2bc800 with size: 1.008118 MiB 00:05:40.656 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:40.656 element at address: 0x2000096fde40 with size: 1.008118 MiB 00:05:40.656 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:40.656 element at address: 0x2000070fba40 with size: 1.008118 MiB 00:05:40.656 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:40.656 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:40.656 associated memzone info: size: 1.000366 MiB name: RG_ring_0_70323 00:05:40.656 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:40.656 associated memzone info: size: 1.000366 MiB name: RG_ring_1_70323 00:05:40.657 element at address: 0x200015ef4580 with size: 1.000488 MiB 00:05:40.657 associated memzone info: size: 1.000366 MiB name: RG_ring_4_70323 00:05:40.657 element at address: 0x200034afe940 with size: 1.000488 MiB 00:05:40.657 associated memzone info: size: 1.000366 MiB name: RG_ring_5_70323 00:05:40.657 element at address: 0x200003a7f680 with size: 0.500488 MiB 00:05:40.657 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_70323 00:05:40.657 element at address: 0x200003e7eec0 with size: 0.500488 MiB 00:05:40.657 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_70323 00:05:40.657 element at address: 0x20000d87db80 with size: 0.500488 MiB 00:05:40.657 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:40.657 element at address: 0x20000707b780 with size: 0.500488 MiB 00:05:40.657 associated memzone info: size: 0.500366 
MiB name: RG_MP_SCSI_TASK_Pool 00:05:40.657 element at address: 0x20001c27c540 with size: 0.250488 MiB 00:05:40.657 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:40.657 element at address: 0x200003a5eac0 with size: 0.125488 MiB 00:05:40.657 associated memzone info: size: 0.125366 MiB name: RG_ring_2_70323 00:05:40.657 element at address: 0x2000096f5b80 with size: 0.031738 MiB 00:05:40.657 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:40.657 element at address: 0x20002ac65680 with size: 0.023743 MiB 00:05:40.657 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:40.657 element at address: 0x200003a5a800 with size: 0.016113 MiB 00:05:40.657 associated memzone info: size: 0.015991 MiB name: RG_ring_3_70323 00:05:40.657 element at address: 0x20002ac6b7c0 with size: 0.002441 MiB 00:05:40.657 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:40.657 element at address: 0x2000002d6780 with size: 0.000305 MiB 00:05:40.657 associated memzone info: size: 0.000183 MiB name: MP_msgpool_70323 00:05:40.657 element at address: 0x200003aff940 with size: 0.000305 MiB 00:05:40.657 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_70323 00:05:40.657 element at address: 0x200003a5a600 with size: 0.000305 MiB 00:05:40.657 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_70323 00:05:40.657 element at address: 0x20002ac6c280 with size: 0.000305 MiB 00:05:40.657 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:40.657 10:37:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:40.657 10:37:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 70323 00:05:40.657 10:37:40 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 70323 ']' 00:05:40.657 10:37:40 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 70323 00:05:40.657 10:37:40 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:05:40.657 10:37:40 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:40.657 10:37:40 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70323 00:05:40.657 10:37:40 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:40.657 10:37:40 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:40.657 10:37:40 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70323' 00:05:40.657 killing process with pid 70323 00:05:40.657 10:37:40 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 70323 00:05:40.657 10:37:40 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 70323 00:05:40.917 00:05:40.917 real 0m1.468s 00:05:40.917 user 0m1.520s 00:05:40.917 sys 0m0.357s 00:05:40.917 10:37:40 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:40.917 10:37:40 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:40.917 ************************************ 00:05:40.917 END TEST dpdk_mem_utility 00:05:40.917 ************************************ 00:05:40.917 10:37:40 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:40.917 10:37:40 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:40.917 10:37:40 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:40.917 10:37:40 -- common/autotest_common.sh@10 -- # set +x 
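In summary, the dpdk_mem_utility test drives one RPC and one script: env_dpdk_get_mem_stats makes the target write a DPDK memory snapshot (the {"filename": "/tmp/spdk_mem_dump.txt"} reply above), and scripts/dpdk_mem_info.py renders it, first as the heap/mempool/memzone summary, then, with -m 0, as the per-element walk of heap 0 reproduced above. Note that every pool and ring in the dump carries the owning pid as a suffix (evtpool_70323, msgpool_70323, RG_ring_0_70323), which ties each region back to the spdk_tgt instance later torn down by killprocess. A hedged manual replay against an already-running target:

  # Assumes a spdk_tgt is listening on the default /var/tmp/spdk.sock.
  SPDK=/home/vagrant/spdk_repo/spdk
  "$SPDK"/scripts/rpc.py env_dpdk_get_mem_stats   # target writes /tmp/spdk_mem_dump.txt
  "$SPDK"/scripts/dpdk_mem_info.py                # summary: heaps, mempools, memzones
  "$SPDK"/scripts/dpdk_mem_info.py -m 0           # per-element detail for heap id 0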
00:05:40.917 ************************************ 00:05:40.917 START TEST event 00:05:40.917 ************************************ 00:05:40.917 10:37:40 event -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:40.917 * Looking for test storage... 00:05:40.917 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:40.917 10:37:40 event -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:40.917 10:37:40 event -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:40.917 10:37:40 event -- common/autotest_common.sh@1681 -- # lcov --version 00:05:40.917 10:37:40 event -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:40.917 10:37:40 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:40.917 10:37:40 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:40.917 10:37:40 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:40.917 10:37:40 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:40.917 10:37:40 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:40.917 10:37:40 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:40.917 10:37:40 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:40.917 10:37:40 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:40.917 10:37:40 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:40.917 10:37:40 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:40.917 10:37:40 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:40.917 10:37:40 event -- scripts/common.sh@344 -- # case "$op" in 00:05:40.917 10:37:40 event -- scripts/common.sh@345 -- # : 1 00:05:40.917 10:37:40 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:40.917 10:37:40 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:40.917 10:37:40 event -- scripts/common.sh@365 -- # decimal 1 00:05:40.917 10:37:40 event -- scripts/common.sh@353 -- # local d=1 00:05:40.917 10:37:40 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:40.917 10:37:40 event -- scripts/common.sh@355 -- # echo 1 00:05:40.917 10:37:40 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:40.917 10:37:40 event -- scripts/common.sh@366 -- # decimal 2 00:05:40.917 10:37:40 event -- scripts/common.sh@353 -- # local d=2 00:05:40.917 10:37:40 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:40.917 10:37:40 event -- scripts/common.sh@355 -- # echo 2 00:05:40.917 10:37:40 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:40.917 10:37:40 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:40.917 10:37:40 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:40.917 10:37:40 event -- scripts/common.sh@368 -- # return 0 00:05:40.917 10:37:40 event -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:40.917 10:37:40 event -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:40.917 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.917 --rc genhtml_branch_coverage=1 00:05:40.917 --rc genhtml_function_coverage=1 00:05:40.917 --rc genhtml_legend=1 00:05:40.917 --rc geninfo_all_blocks=1 00:05:40.917 --rc geninfo_unexecuted_blocks=1 00:05:40.917 00:05:40.917 ' 00:05:40.917 10:37:40 event -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:40.917 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.917 --rc genhtml_branch_coverage=1 00:05:40.917 --rc genhtml_function_coverage=1 00:05:40.917 --rc genhtml_legend=1 00:05:40.917 --rc 
geninfo_all_blocks=1 00:05:40.918 --rc geninfo_unexecuted_blocks=1 00:05:40.918 00:05:40.918 ' 00:05:40.918 10:37:40 event -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:40.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.918 --rc genhtml_branch_coverage=1 00:05:40.918 --rc genhtml_function_coverage=1 00:05:40.918 --rc genhtml_legend=1 00:05:40.918 --rc geninfo_all_blocks=1 00:05:40.918 --rc geninfo_unexecuted_blocks=1 00:05:40.918 00:05:40.918 ' 00:05:40.918 10:37:40 event -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:40.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.918 --rc genhtml_branch_coverage=1 00:05:40.918 --rc genhtml_function_coverage=1 00:05:40.918 --rc genhtml_legend=1 00:05:40.918 --rc geninfo_all_blocks=1 00:05:40.918 --rc geninfo_unexecuted_blocks=1 00:05:40.918 00:05:40.918 ' 00:05:40.918 10:37:40 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:40.918 10:37:40 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:40.918 10:37:40 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:40.918 10:37:40 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:05:40.918 10:37:40 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:40.918 10:37:40 event -- common/autotest_common.sh@10 -- # set +x 00:05:40.918 ************************************ 00:05:40.918 START TEST event_perf 00:05:40.918 ************************************ 00:05:40.918 10:37:40 event.event_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:41.176 Running I/O for 1 seconds...[2024-12-16 10:37:40.915499] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:05:41.176 [2024-12-16 10:37:40.915694] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70399 ] 00:05:41.176 [2024-12-16 10:37:41.050642] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:41.176 [2024-12-16 10:37:41.085422] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:41.176 [2024-12-16 10:37:41.085711] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:05:41.176 [2024-12-16 10:37:41.086074] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:05:41.176 Running I/O for 1 seconds...[2024-12-16 10:37:41.086130] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.554 00:05:42.554 lcore 0: 191143 00:05:42.554 lcore 1: 191137 00:05:42.554 lcore 2: 191140 00:05:42.554 lcore 3: 191141 00:05:42.554 done. 
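The four lcore lines above are event_perf's result: each reactor enabled by the -m 0xF mask reports how many events it processed inside the -t 1 second window, roughly 191k apiece here. The same binary can be rerun by hand; the two-core mask and longer window below are arbitrary example values:

  # Example invocation; flags are the same -m/-t pair used by run_test above.
  /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0x3 -t 5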
00:05:42.554 00:05:42.554 real 0m1.259s 00:05:42.554 user 0m4.052s 00:05:42.554 sys 0m0.079s 00:05:42.554 10:37:42 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:42.554 10:37:42 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:42.554 ************************************ 00:05:42.554 END TEST event_perf 00:05:42.554 ************************************ 00:05:42.554 10:37:42 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:42.554 10:37:42 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:05:42.554 10:37:42 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:42.554 10:37:42 event -- common/autotest_common.sh@10 -- # set +x 00:05:42.554 ************************************ 00:05:42.554 START TEST event_reactor 00:05:42.554 ************************************ 00:05:42.554 10:37:42 event.event_reactor -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:42.554 [2024-12-16 10:37:42.225787] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:05:42.554 [2024-12-16 10:37:42.225898] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70438 ] 00:05:42.554 [2024-12-16 10:37:42.361677] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:42.554 [2024-12-16 10:37:42.393810] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.497 test_start 00:05:43.497 oneshot 00:05:43.497 tick 100 00:05:43.497 tick 100 00:05:43.497 tick 250 00:05:43.497 tick 100 00:05:43.497 tick 100 00:05:43.497 tick 100 00:05:43.497 tick 250 00:05:43.497 tick 500 00:05:43.497 tick 100 00:05:43.497 tick 100 00:05:43.497 tick 250 00:05:43.497 tick 100 00:05:43.497 tick 100 00:05:43.497 test_end 00:05:43.497 00:05:43.497 real 0m1.251s 00:05:43.497 user 0m1.080s 00:05:43.497 sys 0m0.062s 00:05:43.497 ************************************ 00:05:43.497 END TEST event_reactor 00:05:43.497 ************************************ 00:05:43.497 10:37:43 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:43.497 10:37:43 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:43.758 10:37:43 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:43.758 10:37:43 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:05:43.758 10:37:43 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:43.758 10:37:43 event -- common/autotest_common.sh@10 -- # set +x 00:05:43.758 ************************************ 00:05:43.758 START TEST event_reactor_perf 00:05:43.758 ************************************ 00:05:43.758 10:37:43 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:43.758 [2024-12-16 10:37:43.531191] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
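The preceding test_start / tick / test_end trace is the reactor app's timer output during its one-second run: the values 100, 250 and 500 read as three repeating timers of different periods plus a single oneshot, with the shortest period firing most often. More generally, every test in this log is wrapped the same way: the starred START/END banners and the real/user/sys triple come from the run_test helper in autotest_common.sh. A simplified sketch of what that wrapper does (the real helper also tracks suite nesting and exit codes):

  # Simplified run_test: banner, timed execution, closing banner.
  run_test() {
      local name=$1; shift
      echo "************************************"
      echo "START TEST $name"
      echo "************************************"
      time "$@"          # prints the real/user/sys lines seen above
      echo "************************************"
      echo "END TEST $name"
      echo "************************************"
  }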
00:05:43.758 [2024-12-16 10:37:43.531313] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70474 ] 00:05:43.758 [2024-12-16 10:37:43.666687] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.758 [2024-12-16 10:37:43.701368] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.144 test_start 00:05:45.144 test_end 00:05:45.144 Performance: 314235 events per second 00:05:45.144 00:05:45.144 real 0m1.254s 00:05:45.144 user 0m1.088s 00:05:45.144 sys 0m0.059s 00:05:45.144 ************************************ 00:05:45.144 END TEST event_reactor_perf 00:05:45.145 ************************************ 00:05:45.145 10:37:44 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:45.145 10:37:44 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:45.145 10:37:44 event -- event/event.sh@49 -- # uname -s 00:05:45.145 10:37:44 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:45.145 10:37:44 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:45.145 10:37:44 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:45.145 10:37:44 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:45.145 10:37:44 event -- common/autotest_common.sh@10 -- # set +x 00:05:45.145 ************************************ 00:05:45.145 START TEST event_scheduler 00:05:45.145 ************************************ 00:05:45.145 10:37:44 event.event_scheduler -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:45.145 * Looking for test storage... 
00:05:45.145 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:45.145 10:37:44 event.event_scheduler -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:45.145 10:37:44 event.event_scheduler -- common/autotest_common.sh@1681 -- # lcov --version 00:05:45.145 10:37:44 event.event_scheduler -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:45.145 10:37:44 event.event_scheduler -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:45.145 10:37:44 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:45.145 10:37:44 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:45.145 10:37:44 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:45.145 10:37:44 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:45.145 10:37:44 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:45.145 10:37:44 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:45.145 10:37:44 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:45.145 10:37:44 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:45.145 10:37:44 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:45.145 10:37:44 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:45.145 10:37:44 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:45.145 10:37:44 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:45.145 10:37:44 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:45.145 10:37:44 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:45.145 10:37:44 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:45.145 10:37:44 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:45.145 10:37:44 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:45.145 10:37:44 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:45.145 10:37:44 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:45.145 10:37:44 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:45.145 10:37:44 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:45.145 10:37:44 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:45.145 10:37:44 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:45.145 10:37:44 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:45.145 10:37:44 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:45.145 10:37:44 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:45.145 10:37:44 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:45.145 10:37:44 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:45.145 10:37:44 event.event_scheduler -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:45.145 10:37:44 event.event_scheduler -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:45.145 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.145 --rc genhtml_branch_coverage=1 00:05:45.145 --rc genhtml_function_coverage=1 00:05:45.145 --rc genhtml_legend=1 00:05:45.145 --rc geninfo_all_blocks=1 00:05:45.145 --rc geninfo_unexecuted_blocks=1 00:05:45.145 00:05:45.145 ' 00:05:45.145 10:37:44 event.event_scheduler -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:45.145 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.145 --rc genhtml_branch_coverage=1 00:05:45.145 --rc genhtml_function_coverage=1 00:05:45.145 --rc genhtml_legend=1 00:05:45.145 --rc geninfo_all_blocks=1 00:05:45.145 --rc geninfo_unexecuted_blocks=1 00:05:45.145 00:05:45.145 ' 00:05:45.145 10:37:44 event.event_scheduler -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:45.145 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.145 --rc genhtml_branch_coverage=1 00:05:45.145 --rc genhtml_function_coverage=1 00:05:45.145 --rc genhtml_legend=1 00:05:45.145 --rc geninfo_all_blocks=1 00:05:45.145 --rc geninfo_unexecuted_blocks=1 00:05:45.145 00:05:45.145 ' 00:05:45.145 10:37:44 event.event_scheduler -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:45.145 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.145 --rc genhtml_branch_coverage=1 00:05:45.145 --rc genhtml_function_coverage=1 00:05:45.145 --rc genhtml_legend=1 00:05:45.145 --rc geninfo_all_blocks=1 00:05:45.145 --rc geninfo_unexecuted_blocks=1 00:05:45.145 00:05:45.145 ' 00:05:45.145 10:37:44 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:45.145 10:37:44 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=70539 00:05:45.145 10:37:44 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:45.145 10:37:44 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 70539 00:05:45.145 10:37:44 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 70539 ']' 00:05:45.145 10:37:44 event.event_scheduler -- common/autotest_common.sh@835 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:05:45.145 10:37:44 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:45.145 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:45.145 10:37:44 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:45.145 10:37:44 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:45.145 10:37:44 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:45.145 10:37:44 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:45.145 [2024-12-16 10:37:45.025793] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:05:45.145 [2024-12-16 10:37:45.025903] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70539 ] 00:05:45.404 [2024-12-16 10:37:45.161529] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:45.404 [2024-12-16 10:37:45.197560] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.404 [2024-12-16 10:37:45.197774] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:45.404 [2024-12-16 10:37:45.198159] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:05:45.404 [2024-12-16 10:37:45.198234] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:05:45.974 10:37:45 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:45.974 10:37:45 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:05:45.974 10:37:45 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:45.974 10:37:45 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:45.974 10:37:45 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:45.974 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:45.974 POWER: Cannot set governor of lcore 0 to userspace 00:05:45.974 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:45.974 POWER: Cannot set governor of lcore 0 to performance 00:05:45.974 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:45.974 POWER: Cannot set governor of lcore 0 to userspace 00:05:45.974 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:45.974 POWER: Unable to set Power Management Environment for lcore 0 00:05:45.974 [2024-12-16 10:37:45.879504] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:05:45.974 [2024-12-16 10:37:45.879535] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:05:45.974 [2024-12-16 10:37:45.879553] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:45.974 [2024-12-16 10:37:45.879589] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:45.974 [2024-12-16 10:37:45.879597] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:45.974 [2024-12-16 10:37:45.879616]
scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:45.974 10:37:45 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:45.974 10:37:45 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:45.974 10:37:45 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:45.974 10:37:45 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:45.974 [2024-12-16 10:37:45.936948] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:05:45.974 10:37:45 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:45.974 10:37:45 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:45.974 10:37:45 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:45.974 10:37:45 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:45.974 10:37:45 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:45.974 ************************************ 00:05:45.975 START TEST scheduler_create_thread 00:05:45.975 ************************************ 00:05:45.975 10:37:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:05:45.975 10:37:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:45.975 10:37:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:45.975 10:37:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:46.236 2 00:05:46.236 10:37:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:46.236 10:37:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:46.236 10:37:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:46.236 10:37:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:46.236 3 00:05:46.236 10:37:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:46.236 10:37:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:46.236 10:37:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:46.236 10:37:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:46.236 4 00:05:46.236 10:37:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:46.236 10:37:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:46.236 10:37:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:46.236 10:37:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:46.236 5 00:05:46.236 10:37:45 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:46.236 10:37:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:46.236 10:37:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:46.236 10:37:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:46.236 6 00:05:46.236 10:37:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:46.236 10:37:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:46.236 10:37:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:46.236 10:37:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:46.236 7 00:05:46.236 10:37:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:46.236 10:37:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:46.236 10:37:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:46.236 10:37:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:46.236 8 00:05:46.236 10:37:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:46.236 10:37:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:46.236 10:37:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:46.236 10:37:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:46.236 9 00:05:46.236 10:37:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:46.236 10:37:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:46.236 10:37:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:46.236 10:37:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:46.236 10 00:05:46.236 10:37:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:46.236 10:37:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:46.236 10:37:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:46.236 10:37:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:46.236 10:37:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:46.236 10:37:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:46.236 10:37:46 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:46.236 10:37:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:46.236 10:37:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:46.807 10:37:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:46.807 10:37:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:46.807 10:37:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:46.807 10:37:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:48.186 10:37:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:48.186 10:37:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:48.186 10:37:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:48.186 10:37:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:48.186 10:37:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:49.120 10:37:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:49.120 00:05:49.120 real 0m3.095s 00:05:49.120 user 0m0.021s 00:05:49.120 sys 0m0.002s 00:05:49.120 10:37:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:49.120 10:37:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:49.120 ************************************ 00:05:49.120 END TEST scheduler_create_thread 00:05:49.120 ************************************ 00:05:49.120 10:37:49 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:49.120 10:37:49 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 70539 00:05:49.120 10:37:49 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 70539 ']' 00:05:49.120 10:37:49 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 70539 00:05:49.120 10:37:49 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:05:49.120 10:37:49 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:49.120 10:37:49 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70539 00:05:49.120 10:37:49 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:05:49.120 10:37:49 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:05:49.120 10:37:49 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70539' 00:05:49.120 killing process with pid 70539 00:05:49.120 10:37:49 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 70539 00:05:49.120 10:37:49 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 70539 00:05:49.730 [2024-12-16 10:37:49.428196] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
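The scheduler test just traced is driven entirely over JSON-RPC; rpc_cmd in the trace is the harness wrapper around scripts/rpc.py. A minimal hand-run sketch of the same sequence, assuming the scheduler app was started with --wait-for-rpc on the default socket and that the test's scheduler_plugin module is importable (the harness arranges this via PYTHONPATH):

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
    # select the dynamic scheduler, then let subsystem init finish
    $rpc framework_set_scheduler dynamic
    $rpc framework_start_init
    # create a thread pinned to core 0 reporting 100% activity; the RPC returns its thread id
    id=$($rpc --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100)
    # re-weight the thread to 50% activity, then remove it
    $rpc --plugin scheduler_plugin scheduler_thread_set_active "$id" 50
    $rpc --plugin scheduler_plugin scheduler_thread_delete "$id"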
00:05:49.730 00:05:49.730 real 0m4.775s 00:05:49.730 user 0m9.070s 00:05:49.730 sys 0m0.308s 00:05:49.730 10:37:49 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:49.730 ************************************ 00:05:49.730 END TEST event_scheduler 00:05:49.730 ************************************ 00:05:49.730 10:37:49 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:49.730 10:37:49 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:49.730 10:37:49 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:49.730 10:37:49 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:49.730 10:37:49 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:49.730 10:37:49 event -- common/autotest_common.sh@10 -- # set +x 00:05:49.730 ************************************ 00:05:49.730 START TEST app_repeat 00:05:49.730 ************************************ 00:05:49.730 10:37:49 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:05:49.730 10:37:49 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:49.730 10:37:49 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:49.730 10:37:49 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:49.730 10:37:49 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:49.730 10:37:49 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:49.730 10:37:49 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:49.730 10:37:49 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:49.730 10:37:49 event.app_repeat -- event/event.sh@19 -- # repeat_pid=70644 00:05:49.730 10:37:49 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:49.730 Process app_repeat pid: 70644 00:05:49.730 10:37:49 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 70644' 00:05:49.730 10:37:49 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:49.730 spdk_app_start Round 0 00:05:49.730 10:37:49 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:49.730 10:37:49 event.app_repeat -- event/event.sh@25 -- # waitforlisten 70644 /var/tmp/spdk-nbd.sock 00:05:49.730 10:37:49 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:49.730 10:37:49 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 70644 ']' 00:05:49.730 10:37:49 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:49.730 10:37:49 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:49.730 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:49.730 10:37:49 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:49.730 10:37:49 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:49.730 10:37:49 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:49.730 [2024-12-16 10:37:49.691613] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
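The app_repeat harness starting here follows the same launch pattern throughout: spawn the app with a private RPC socket, then poll until the socket answers before issuing any RPCs. A rough sketch, with the binary and flags copied verbatim from the trace and a bounded retry loop mirroring max_retries=100:

    app=/home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat
    sock=/var/tmp/spdk-nbd.sock
    $app -r "$sock" -m 0x3 -t 4 &    # flags as in the trace: RPC socket, core mask 0x3, -t 4
    pid=$!
    for _ in $(seq 1 100); do        # wait until the RPC server is listening
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" -t 1 rpc_get_methods >/dev/null 2>&1 && break
        sleep 0.1
    done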
00:05:49.730 [2024-12-16 10:37:49.691720] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70644 ] 00:05:49.989 [2024-12-16 10:37:49.826191] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:49.989 [2024-12-16 10:37:49.855222] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:49.989 [2024-12-16 10:37:49.855254] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.556 10:37:50 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:50.556 10:37:50 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:50.556 10:37:50 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:50.815 Malloc0 00:05:50.815 10:37:50 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:51.073 Malloc1 00:05:51.073 10:37:50 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:51.073 10:37:50 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:51.073 10:37:50 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:51.073 10:37:50 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:51.073 10:37:50 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:51.073 10:37:50 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:51.073 10:37:50 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:51.073 10:37:50 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:51.073 10:37:50 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:51.073 10:37:50 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:51.073 10:37:50 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:51.073 10:37:50 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:51.073 10:37:50 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:51.073 10:37:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:51.073 10:37:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:51.073 10:37:50 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:51.334 /dev/nbd0 00:05:51.334 10:37:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:51.334 10:37:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:51.334 10:37:51 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:51.334 10:37:51 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:51.334 10:37:51 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:51.334 10:37:51 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:51.334 10:37:51 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:51.334 10:37:51 event.app_repeat -- 
common/autotest_common.sh@873 -- # break 00:05:51.334 10:37:51 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:51.334 10:37:51 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:51.334 10:37:51 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:51.334 1+0 records in 00:05:51.334 1+0 records out 00:05:51.334 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000488902 s, 8.4 MB/s 00:05:51.334 10:37:51 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:51.334 10:37:51 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:51.334 10:37:51 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:51.334 10:37:51 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:51.334 10:37:51 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:51.334 10:37:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:51.334 10:37:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:51.334 10:37:51 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:51.595 /dev/nbd1 00:05:51.595 10:37:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:51.595 10:37:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:51.595 10:37:51 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:51.595 10:37:51 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:51.595 10:37:51 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:51.595 10:37:51 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:51.595 10:37:51 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:51.595 10:37:51 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:51.595 10:37:51 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:51.595 10:37:51 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:51.595 10:37:51 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:51.595 1+0 records in 00:05:51.595 1+0 records out 00:05:51.595 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000380184 s, 10.8 MB/s 00:05:51.595 10:37:51 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:51.595 10:37:51 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:51.595 10:37:51 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:51.595 10:37:51 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:51.595 10:37:51 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:51.595 10:37:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:51.595 10:37:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:51.595 10:37:51 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:51.595 10:37:51 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:51.595 
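The waitfornbd helper traced above (autotest_common.sh@868 onward) reduces to two checks: the kernel must list the device in /proc/partitions, and a single direct-I/O read from it must return data. A minimal reconstruction using the scratch path from the trace:

    nbd=nbd0
    test_file=/home/vagrant/spdk_repo/spdk/test/event/nbdtest
    for i in $(seq 1 20); do                       # same 20-attempt cap as the helper
        grep -q -w "$nbd" /proc/partitions && break
        sleep 0.1
    done
    dd if=/dev/$nbd of="$test_file" bs=4096 count=1 iflag=direct
    size=$(stat -c %s "$test_file")                # a zero-byte result means the device is not ready
    rm -f "$test_file"
    [ "$size" != 0 ]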
10:37:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:51.855 10:37:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:51.855 { 00:05:51.855 "nbd_device": "/dev/nbd0", 00:05:51.855 "bdev_name": "Malloc0" 00:05:51.855 }, 00:05:51.855 { 00:05:51.855 "nbd_device": "/dev/nbd1", 00:05:51.855 "bdev_name": "Malloc1" 00:05:51.855 } 00:05:51.855 ]' 00:05:51.855 10:37:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:51.855 { 00:05:51.855 "nbd_device": "/dev/nbd0", 00:05:51.855 "bdev_name": "Malloc0" 00:05:51.855 }, 00:05:51.855 { 00:05:51.855 "nbd_device": "/dev/nbd1", 00:05:51.855 "bdev_name": "Malloc1" 00:05:51.855 } 00:05:51.855 ]' 00:05:51.855 10:37:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:51.855 10:37:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:51.855 /dev/nbd1' 00:05:51.855 10:37:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:51.855 /dev/nbd1' 00:05:51.855 10:37:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:51.855 10:37:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:51.855 10:37:51 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:51.855 10:37:51 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:51.855 10:37:51 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:51.855 10:37:51 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:51.855 10:37:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:51.855 10:37:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:51.855 10:37:51 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:51.855 10:37:51 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:51.855 10:37:51 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:51.855 10:37:51 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:51.855 256+0 records in 00:05:51.855 256+0 records out 00:05:51.855 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00497548 s, 211 MB/s 00:05:51.855 10:37:51 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:51.855 10:37:51 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:51.855 256+0 records in 00:05:51.855 256+0 records out 00:05:51.855 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0204531 s, 51.3 MB/s 00:05:51.855 10:37:51 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:51.855 10:37:51 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:51.855 256+0 records in 00:05:51.855 256+0 records out 00:05:51.855 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0192574 s, 54.5 MB/s 00:05:51.855 10:37:51 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:51.855 10:37:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:51.855 10:37:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:51.855 10:37:51 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:51.855 10:37:51 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:51.855 10:37:51 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:51.855 10:37:51 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:51.855 10:37:51 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:51.855 10:37:51 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:51.855 10:37:51 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:51.855 10:37:51 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:51.855 10:37:51 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:51.855 10:37:51 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:51.855 10:37:51 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:51.855 10:37:51 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:51.855 10:37:51 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:51.855 10:37:51 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:51.855 10:37:51 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:51.855 10:37:51 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:52.114 10:37:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:52.114 10:37:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:52.114 10:37:51 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:52.114 10:37:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:52.114 10:37:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:52.114 10:37:51 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:52.114 10:37:51 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:52.114 10:37:51 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:52.114 10:37:51 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:52.114 10:37:51 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:52.114 10:37:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:52.114 10:37:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:52.114 10:37:52 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:52.114 10:37:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:52.114 10:37:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:52.114 10:37:52 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:52.114 10:37:52 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:52.114 10:37:52 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:52.114 10:37:52 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:52.114 10:37:52 event.app_repeat -- 
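The write/verify pass just completed follows a simple pattern: build a 1 MiB random reference file, stream it onto each exported NBD node with direct I/O, then byte-compare each device against the reference and delete the scratch file. Condensed from the trace:

    tmp=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
    dd if=/dev/urandom of="$tmp" bs=4096 count=256            # 256 x 4 KiB = 1 MiB reference
    for nbd in /dev/nbd0 /dev/nbd1; do
        dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct # write, bypassing the page cache
    done
    for nbd in /dev/nbd0 /dev/nbd1; do
        cmp -b -n 1M "$tmp" "$nbd"                            # verify the first 1 MiB byte-for-byte
    done
    rm "$tmp"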
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:52.114 10:37:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:52.372 10:37:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:52.372 10:37:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:52.372 10:37:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:52.372 10:37:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:52.372 10:37:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:52.372 10:37:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:52.372 10:37:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:52.372 10:37:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:52.372 10:37:52 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:52.372 10:37:52 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:52.372 10:37:52 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:52.372 10:37:52 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:52.372 10:37:52 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:52.629 10:37:52 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:52.630 [2024-12-16 10:37:52.606320] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:52.888 [2024-12-16 10:37:52.632807] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:52.888 [2024-12-16 10:37:52.632915] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.888 [2024-12-16 10:37:52.661771] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:52.888 [2024-12-16 10:37:52.661816] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:56.169 10:37:55 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:56.169 spdk_app_start Round 1 00:05:56.169 10:37:55 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:56.169 10:37:55 event.app_repeat -- event/event.sh@25 -- # waitforlisten 70644 /var/tmp/spdk-nbd.sock 00:05:56.169 10:37:55 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 70644 ']' 00:05:56.169 10:37:55 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:56.169 10:37:55 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:56.169 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:56.169 10:37:55 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
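Between rounds, as just traced, the harness checks that nbd_get_disks reports no remaining exports, then shuts the app down over RPC and pauses before relaunching. Roughly:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    # an empty nbd_get_disks list means both devices were stopped cleanly
    count=$($rpc nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
    [ "$count" -eq 0 ]
    $rpc spdk_kill_instance SIGTERM   # ask the app to deliver SIGTERM to itself
    sleep 3                           # grace period before the next round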
00:05:56.169 10:37:55 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:56.169 10:37:55 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:56.169 10:37:55 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:56.169 10:37:55 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:56.169 10:37:55 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:56.169 Malloc0 00:05:56.169 10:37:55 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:56.169 Malloc1 00:05:56.169 10:37:56 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:56.169 10:37:56 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:56.169 10:37:56 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:56.169 10:37:56 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:56.169 10:37:56 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:56.169 10:37:56 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:56.169 10:37:56 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:56.169 10:37:56 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:56.169 10:37:56 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:56.169 10:37:56 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:56.169 10:37:56 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:56.169 10:37:56 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:56.169 10:37:56 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:56.169 10:37:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:56.169 10:37:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:56.169 10:37:56 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:56.428 /dev/nbd0 00:05:56.428 10:37:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:56.428 10:37:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:56.428 10:37:56 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:56.428 10:37:56 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:56.428 10:37:56 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:56.428 10:37:56 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:56.428 10:37:56 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:56.428 10:37:56 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:56.428 10:37:56 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:56.428 10:37:56 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:56.428 10:37:56 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:56.428 1+0 records in 00:05:56.428 1+0 records out 
00:05:56.428 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000280469 s, 14.6 MB/s 00:05:56.428 10:37:56 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:56.428 10:37:56 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:56.428 10:37:56 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:56.428 10:37:56 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:56.428 10:37:56 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:56.428 10:37:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:56.428 10:37:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:56.428 10:37:56 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:56.687 /dev/nbd1 00:05:56.687 10:37:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:56.687 10:37:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:56.687 10:37:56 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:56.687 10:37:56 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:56.687 10:37:56 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:56.687 10:37:56 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:56.687 10:37:56 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:56.687 10:37:56 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:56.687 10:37:56 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:56.687 10:37:56 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:56.687 10:37:56 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:56.687 1+0 records in 00:05:56.687 1+0 records out 00:05:56.687 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000189008 s, 21.7 MB/s 00:05:56.687 10:37:56 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:56.687 10:37:56 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:56.687 10:37:56 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:56.687 10:37:56 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:56.687 10:37:56 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:56.687 10:37:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:56.687 10:37:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:56.687 10:37:56 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:56.687 10:37:56 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:56.687 10:37:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:56.946 10:37:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:56.946 { 00:05:56.946 "nbd_device": "/dev/nbd0", 00:05:56.946 "bdev_name": "Malloc0" 00:05:56.946 }, 00:05:56.946 { 00:05:56.946 "nbd_device": "/dev/nbd1", 00:05:56.946 "bdev_name": "Malloc1" 00:05:56.946 } 
00:05:56.946 ]' 00:05:56.946 10:37:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:56.946 { 00:05:56.946 "nbd_device": "/dev/nbd0", 00:05:56.946 "bdev_name": "Malloc0" 00:05:56.946 }, 00:05:56.946 { 00:05:56.946 "nbd_device": "/dev/nbd1", 00:05:56.946 "bdev_name": "Malloc1" 00:05:56.946 } 00:05:56.946 ]' 00:05:56.946 10:37:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:56.946 10:37:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:56.946 /dev/nbd1' 00:05:56.946 10:37:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:56.946 10:37:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:56.946 /dev/nbd1' 00:05:56.946 10:37:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:56.946 10:37:56 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:56.946 10:37:56 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:56.946 10:37:56 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:56.946 10:37:56 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:56.946 10:37:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:56.946 10:37:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:56.946 10:37:56 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:56.946 10:37:56 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:56.946 10:37:56 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:56.946 10:37:56 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:56.946 256+0 records in 00:05:56.946 256+0 records out 00:05:56.946 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00573041 s, 183 MB/s 00:05:56.946 10:37:56 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:56.946 10:37:56 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:56.946 256+0 records in 00:05:56.946 256+0 records out 00:05:56.946 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0142474 s, 73.6 MB/s 00:05:56.946 10:37:56 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:56.946 10:37:56 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:56.946 256+0 records in 00:05:56.946 256+0 records out 00:05:56.946 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0157443 s, 66.6 MB/s 00:05:56.946 10:37:56 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:56.946 10:37:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:56.946 10:37:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:56.946 10:37:56 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:56.946 10:37:56 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:56.946 10:37:56 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:56.946 10:37:56 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:56.946 10:37:56 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:56.946 10:37:56 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:56.946 10:37:56 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:56.946 10:37:56 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:56.946 10:37:56 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:56.946 10:37:56 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:56.946 10:37:56 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:56.946 10:37:56 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:56.946 10:37:56 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:56.946 10:37:56 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:56.946 10:37:56 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:56.946 10:37:56 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:57.205 10:37:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:57.205 10:37:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:57.205 10:37:57 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:57.205 10:37:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:57.205 10:37:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:57.205 10:37:57 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:57.205 10:37:57 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:57.205 10:37:57 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:57.205 10:37:57 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:57.205 10:37:57 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:57.463 10:37:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:57.463 10:37:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:57.463 10:37:57 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:57.463 10:37:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:57.463 10:37:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:57.463 10:37:57 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:57.463 10:37:57 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:57.463 10:37:57 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:57.463 10:37:57 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:57.463 10:37:57 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:57.463 10:37:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:57.721 10:37:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:57.721 10:37:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:57.721 10:37:57 event.app_repeat -- 
bdev/nbd_common.sh@64 -- # echo '[]' 00:05:57.721 10:37:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:57.721 10:37:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:57.721 10:37:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:57.721 10:37:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:57.721 10:37:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:57.722 10:37:57 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:57.722 10:37:57 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:57.722 10:37:57 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:57.722 10:37:57 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:57.722 10:37:57 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:57.980 10:37:57 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:57.980 [2024-12-16 10:37:57.842315] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:57.980 [2024-12-16 10:37:57.868405] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:57.980 [2024-12-16 10:37:57.868585] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.980 [2024-12-16 10:37:57.897765] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:57.980 [2024-12-16 10:37:57.897810] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:01.287 10:38:00 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:01.287 spdk_app_start Round 2 00:06:01.287 10:38:00 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:01.287 10:38:00 event.app_repeat -- event/event.sh@25 -- # waitforlisten 70644 /var/tmp/spdk-nbd.sock 00:06:01.287 10:38:00 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 70644 ']' 00:06:01.287 10:38:00 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:01.287 10:38:00 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:01.287 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:01.287 10:38:00 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
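Each round then rebuilds the same topology from scratch: two 64 MB malloc bdevs with a 4096-byte block size, each exported as a kernel NBD node. The equivalent manual sequence, with names and device nodes as in the trace:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    $rpc bdev_malloc_create 64 4096        # prints the new bdev name (Malloc0)
    $rpc bdev_malloc_create 64 4096        # prints Malloc1
    $rpc nbd_start_disk Malloc0 /dev/nbd0  # attach each bdev to an NBD device node
    $rpc nbd_start_disk Malloc1 /dev/nbd1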
00:06:01.287 10:38:00 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:01.287 10:38:00 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:01.287 10:38:00 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:01.287 10:38:00 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:01.287 10:38:00 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:01.287 Malloc0 00:06:01.287 10:38:01 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:01.546 Malloc1 00:06:01.546 10:38:01 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:01.546 10:38:01 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:01.546 10:38:01 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:01.546 10:38:01 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:01.546 10:38:01 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:01.546 10:38:01 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:01.546 10:38:01 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:01.546 10:38:01 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:01.546 10:38:01 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:01.546 10:38:01 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:01.546 10:38:01 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:01.546 10:38:01 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:01.546 10:38:01 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:01.546 10:38:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:01.546 10:38:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:01.546 10:38:01 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:01.804 /dev/nbd0 00:06:01.804 10:38:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:01.804 10:38:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:01.804 10:38:01 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:01.804 10:38:01 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:01.804 10:38:01 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:01.804 10:38:01 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:01.805 10:38:01 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:01.805 10:38:01 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:01.805 10:38:01 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:01.805 10:38:01 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:01.805 10:38:01 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:01.805 1+0 records in 00:06:01.805 1+0 records out 
00:06:01.805 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000138252 s, 29.6 MB/s 00:06:01.805 10:38:01 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:01.805 10:38:01 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:01.805 10:38:01 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:01.805 10:38:01 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:01.805 10:38:01 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:01.805 10:38:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:01.805 10:38:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:01.805 10:38:01 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:02.063 /dev/nbd1 00:06:02.063 10:38:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:02.063 10:38:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:02.063 10:38:01 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:02.063 10:38:01 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:02.063 10:38:01 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:02.063 10:38:01 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:02.063 10:38:01 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:02.063 10:38:01 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:02.063 10:38:01 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:02.063 10:38:01 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:02.063 10:38:01 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:02.063 1+0 records in 00:06:02.063 1+0 records out 00:06:02.063 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000342219 s, 12.0 MB/s 00:06:02.063 10:38:01 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:02.063 10:38:01 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:02.063 10:38:01 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:02.063 10:38:01 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:02.063 10:38:01 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:02.063 10:38:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:02.063 10:38:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:02.063 10:38:01 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:02.063 10:38:01 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:02.063 10:38:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:02.063 10:38:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:02.063 { 00:06:02.063 "nbd_device": "/dev/nbd0", 00:06:02.063 "bdev_name": "Malloc0" 00:06:02.063 }, 00:06:02.063 { 00:06:02.064 "nbd_device": "/dev/nbd1", 00:06:02.064 "bdev_name": "Malloc1" 00:06:02.064 } 
00:06:02.064 ]'
00:06:02.064 10:38:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:06:02.321 10:38:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:06:02.321 {
00:06:02.321 "nbd_device": "/dev/nbd0",
00:06:02.321 "bdev_name": "Malloc0"
00:06:02.321 },
00:06:02.321 {
00:06:02.321 "nbd_device": "/dev/nbd1",
00:06:02.321 "bdev_name": "Malloc1"
00:06:02.321 }
00:06:02.321 ]'
00:06:02.321 10:38:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:06:02.321 /dev/nbd1'
00:06:02.321 10:38:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:06:02.321 10:38:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:06:02.321 /dev/nbd1'
00:06:02.321 10:38:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:06:02.321 10:38:02 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:06:02.321 10:38:02 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:06:02.321 10:38:02 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:06:02.321 10:38:02 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:06:02.321 10:38:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:02.321 10:38:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:06:02.321 10:38:02 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:06:02.321 10:38:02 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:06:02.321 10:38:02 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:06:02.321 10:38:02 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256
00:06:02.321 256+0 records in
00:06:02.321 256+0 records out
00:06:02.321 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00838896 s, 125 MB/s
00:06:02.321 10:38:02 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:06:02.321 10:38:02 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:06:02.321 256+0 records in
00:06:02.321 256+0 records out
00:06:02.321 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0136971 s, 76.6 MB/s
00:06:02.321 10:38:02 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:06:02.321 10:38:02 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:06:02.321 256+0 records in
00:06:02.321 256+0 records out
00:06:02.321 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0146459 s, 71.6 MB/s
00:06:02.321 10:38:02 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:06:02.321 10:38:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:02.321 10:38:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:06:02.321 10:38:02 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:06:02.321 10:38:02 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:06:02.321 10:38:02 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:06:02.321 10:38:02 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:06:02.321 10:38:02 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:06:02.321 10:38:02 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0
00:06:02.321 10:38:02 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:06:02.321 10:38:02 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1
00:06:02.321 10:38:02 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:06:02.321 10:38:02 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:06:02.321 10:38:02 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:02.321 10:38:02 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:02.321 10:38:02 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:06:02.321 10:38:02 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:06:02.321 10:38:02 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:06:02.321 10:38:02 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:06:02.579 10:38:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:06:02.579 10:38:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:06:02.579 10:38:02 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:06:02.579 10:38:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:06:02.579 10:38:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:06:02.579 10:38:02 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:06:02.579 10:38:02 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:06:02.579 10:38:02 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:06:02.579 10:38:02 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:06:02.579 10:38:02 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:06:02.579 10:38:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:06:02.579 10:38:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:06:02.579 10:38:02 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:06:02.579 10:38:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:06:02.579 10:38:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:06:02.579 10:38:02 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:06:02.579 10:38:02 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:06:02.579 10:38:02 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:06:02.579 10:38:02 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:06:02.579 10:38:02 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:02.579 10:38:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:06:02.838 10:38:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:06:02.838 10:38:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:06:02.838 10:38:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:06:02.838 10:38:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:06:02.838 10:38:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:06:02.838 10:38:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:06:02.838 10:38:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:06:02.838 10:38:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:06:02.838 10:38:02 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:06:02.838 10:38:02 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:06:02.838 10:38:02 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:06:02.838 10:38:02 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:06:02.838 10:38:02 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:06:03.097 10:38:03 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:06:03.356 [2024-12-16 10:38:03.085684] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2
00:06:03.356 [2024-12-16 10:38:03.113674] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:06:03.356 [2024-12-16 10:38:03.113682] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1
00:06:03.356 [2024-12-16 10:38:03.142848] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:06:03.356 [2024-12-16 10:38:03.142891] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:06:06.643 10:38:06 event.app_repeat -- event/event.sh@38 -- # waitforlisten 70644 /var/tmp/spdk-nbd.sock
00:06:06.643 10:38:06 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 70644 ']'
00:06:06.643 10:38:06 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:06:06.643 10:38:06 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100
00:06:06.643 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 10:38:06 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
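The app_repeat trace above is the whole nbd data-path check in miniature: fill a scratch file with random bytes, dd it onto every exported /dev/nbdX with O_DIRECT, then cmp the first 1 MiB of each device back against the file. A condensed, self-contained sketch of that write-then-verify flow, using the exact dd and cmp invocations visible in the trace (the function name and the /tmp scratch path are illustrative, not the shipped bdev/nbd_common.sh):

    #!/usr/bin/env bash
    # Sketch of the nbd_dd_data_verify write/verify pattern traced above.
    nbd_dd_data_verify_sketch() {
        local tmp_file=/tmp/nbdrandtest   # the run above uses .../spdk/test/event/nbdrandtest
        local dev

        # write phase: 256 x 4 KiB of random data, pushed to each device with O_DIRECT
        dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
        for dev in "$@"; do
            dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
        done

        # verify phase: byte-wise compare of the first 1 MiB of each device
        for dev in "$@"; do
            cmp -b -n 1M "$tmp_file" "$dev" || return 1
        done
        rm -f "$tmp_file"
    }

    # usage, matching the devices in this run:
    # nbd_dd_data_verify_sketch /dev/nbd0 /dev/nbd1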
00:06:06.643 10:38:06 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:06.643 10:38:06 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:06.643 10:38:06 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:06.643 10:38:06 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:06.643 10:38:06 event.app_repeat -- event/event.sh@39 -- # killprocess 70644 00:06:06.643 10:38:06 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 70644 ']' 00:06:06.643 10:38:06 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 70644 00:06:06.643 10:38:06 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:06:06.643 10:38:06 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:06.643 10:38:06 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70644 00:06:06.643 10:38:06 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:06.643 10:38:06 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:06.643 killing process with pid 70644 00:06:06.643 10:38:06 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70644' 00:06:06.643 10:38:06 event.app_repeat -- common/autotest_common.sh@969 -- # kill 70644 00:06:06.643 10:38:06 event.app_repeat -- common/autotest_common.sh@974 -- # wait 70644 00:06:06.643 spdk_app_start is called in Round 0. 00:06:06.643 Shutdown signal received, stop current app iteration 00:06:06.643 Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 reinitialization... 00:06:06.643 spdk_app_start is called in Round 1. 00:06:06.643 Shutdown signal received, stop current app iteration 00:06:06.643 Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 reinitialization... 00:06:06.643 spdk_app_start is called in Round 2. 00:06:06.643 Shutdown signal received, stop current app iteration 00:06:06.643 Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 reinitialization... 00:06:06.643 spdk_app_start is called in Round 3. 00:06:06.643 Shutdown signal received, stop current app iteration 00:06:06.643 10:38:06 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:06.643 10:38:06 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:06.643 00:06:06.643 real 0m16.696s 00:06:06.643 user 0m37.238s 00:06:06.643 sys 0m1.960s 00:06:06.643 10:38:06 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:06.643 ************************************ 00:06:06.643 END TEST app_repeat 00:06:06.643 ************************************ 00:06:06.643 10:38:06 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:06.643 10:38:06 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:06.643 10:38:06 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:06.643 10:38:06 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:06.643 10:38:06 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:06.643 10:38:06 event -- common/autotest_common.sh@10 -- # set +x 00:06:06.643 ************************************ 00:06:06.643 START TEST cpu_locks 00:06:06.643 ************************************ 00:06:06.643 10:38:06 event.cpu_locks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:06.643 * Looking for test storage... 
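The kill/wait stanza that closes app_repeat above (kill -0, uname, ps --no-headers -o comm=, kill, wait) recurs throughout this run; it is a killprocess-style helper that refuses to signal a sudo wrapper and then reaps the target. A rough reconstruction from the traced commands; the signal choice and error handling here are assumptions, not the shipped common/autotest_common.sh:

    killprocess_sketch() {
        local pid=$1 process_name
        [ -z "$pid" ] && return 1
        kill -0 "$pid" || return 0          # nothing to do if it is already gone
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid")
            # never SIGTERM a sudo wrapper; the expected name in this run is reactor_0
            [ "$process_name" = sudo ] && return 1
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"    # works because the target was started by this same shell
    }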
00:06:06.643 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:06.643 10:38:06 event.cpu_locks -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:06.643 10:38:06 event.cpu_locks -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:06.643 10:38:06 event.cpu_locks -- common/autotest_common.sh@1681 -- # lcov --version 00:06:06.643 10:38:06 event.cpu_locks -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:06.643 10:38:06 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:06.643 10:38:06 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:06.643 10:38:06 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:06.643 10:38:06 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:06:06.643 10:38:06 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:06:06.643 10:38:06 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:06:06.643 10:38:06 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:06:06.643 10:38:06 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:06:06.643 10:38:06 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:06:06.643 10:38:06 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:06:06.643 10:38:06 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:06.643 10:38:06 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:06:06.643 10:38:06 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:06:06.643 10:38:06 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:06.643 10:38:06 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:06.643 10:38:06 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:06:06.643 10:38:06 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:06:06.643 10:38:06 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:06.643 10:38:06 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:06:06.643 10:38:06 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:06:06.643 10:38:06 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:06:06.643 10:38:06 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:06:06.643 10:38:06 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:06.643 10:38:06 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:06:06.643 10:38:06 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:06:06.643 10:38:06 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:06.643 10:38:06 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:06.643 10:38:06 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:06:06.643 10:38:06 event.cpu_locks -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:06.643 10:38:06 event.cpu_locks -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:06.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.643 --rc genhtml_branch_coverage=1 00:06:06.643 --rc genhtml_function_coverage=1 00:06:06.643 --rc genhtml_legend=1 00:06:06.643 --rc geninfo_all_blocks=1 00:06:06.643 --rc geninfo_unexecuted_blocks=1 00:06:06.643 00:06:06.643 ' 00:06:06.643 10:38:06 event.cpu_locks -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:06.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.643 --rc genhtml_branch_coverage=1 00:06:06.643 --rc genhtml_function_coverage=1 
00:06:06.643 --rc genhtml_legend=1 00:06:06.643 --rc geninfo_all_blocks=1 00:06:06.643 --rc geninfo_unexecuted_blocks=1 00:06:06.643 00:06:06.643 ' 00:06:06.643 10:38:06 event.cpu_locks -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:06.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.643 --rc genhtml_branch_coverage=1 00:06:06.643 --rc genhtml_function_coverage=1 00:06:06.643 --rc genhtml_legend=1 00:06:06.643 --rc geninfo_all_blocks=1 00:06:06.643 --rc geninfo_unexecuted_blocks=1 00:06:06.643 00:06:06.643 ' 00:06:06.643 10:38:06 event.cpu_locks -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:06.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.643 --rc genhtml_branch_coverage=1 00:06:06.643 --rc genhtml_function_coverage=1 00:06:06.643 --rc genhtml_legend=1 00:06:06.643 --rc geninfo_all_blocks=1 00:06:06.643 --rc geninfo_unexecuted_blocks=1 00:06:06.643 00:06:06.643 ' 00:06:06.643 10:38:06 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:06.643 10:38:06 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:06.643 10:38:06 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:06.643 10:38:06 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:06.643 10:38:06 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:06.643 10:38:06 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:06.643 10:38:06 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:06.643 ************************************ 00:06:06.643 START TEST default_locks 00:06:06.643 ************************************ 00:06:06.643 10:38:06 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:06:06.643 10:38:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=71065 00:06:06.643 10:38:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 71065 00:06:06.643 10:38:06 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 71065 ']' 00:06:06.643 10:38:06 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:06.643 10:38:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:06.643 10:38:06 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:06.643 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:06.643 10:38:06 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:06.643 10:38:06 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:06.643 10:38:06 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:06.905 [2024-12-16 10:38:06.636663] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
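The lcov gate traced earlier in this test (lt 1.15 2 feeding cmp_versions 1.15 '<' 2) is a pure-bash version comparison: both strings are split on '.', '-' and ':' via IFS, and the components are compared as integers left to right. A condensed sketch of that logic; the real scripts/common.sh also validates every component as decimal, which is elided here:

    cmp_versions_sketch() { # e.g. cmp_versions_sketch 1.15 '<' 2
        local IFS=.-: op=$2
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            local d1=${ver1[v]:-0} d2=${ver2[v]:-0}   # missing components count as 0
            (( d1 > d2 )) && { [[ $op == '>' ]]; return; }
            (( d1 < d2 )) && { [[ $op == '<' ]]; return; }
        done
        [[ $op == '==' || $op == '<=' || $op == '>=' ]]   # all components equal
    }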
00:06:06.905 [2024-12-16 10:38:06.636827] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71065 ] 00:06:06.905 [2024-12-16 10:38:06.774062] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.905 [2024-12-16 10:38:06.827246] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.849 10:38:07 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:07.849 10:38:07 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:06:07.849 10:38:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 71065 00:06:07.849 10:38:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 71065 00:06:07.849 10:38:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:07.849 10:38:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 71065 00:06:07.849 10:38:07 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 71065 ']' 00:06:07.849 10:38:07 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 71065 00:06:07.849 10:38:07 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:06:07.849 10:38:07 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:07.849 10:38:07 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71065 00:06:07.849 10:38:07 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:07.849 killing process with pid 71065 00:06:07.849 10:38:07 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:07.849 10:38:07 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71065' 00:06:07.849 10:38:07 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 71065 00:06:07.849 10:38:07 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 71065 00:06:08.110 10:38:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 71065 00:06:08.110 10:38:08 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:06:08.110 10:38:08 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 71065 00:06:08.110 10:38:08 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:08.110 10:38:08 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:08.110 10:38:08 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:08.110 10:38:08 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:08.110 10:38:08 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 71065 00:06:08.110 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
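The locks_exist call traced above for pid 71065 is a single pipeline: ask lslocks for the file locks held by the target and grep for the spdk_cpu_lock marker that spdk_tgt takes per claimed core. Equivalent standalone form:

    locks_exist_sketch() {
        # succeeds when the process still holds at least one CPU core lock file
        lslocks -p "$1" | grep -q spdk_cpu_lock
    }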
00:06:08.110 10:38:08 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 71065 ']' 00:06:08.110 10:38:08 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:08.110 10:38:08 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:08.110 10:38:08 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:08.110 10:38:08 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:08.110 ERROR: process (pid: 71065) is no longer running 00:06:08.110 10:38:08 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:08.110 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (71065) - No such process 00:06:08.110 10:38:08 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:08.110 10:38:08 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:06:08.110 10:38:08 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:06:08.110 10:38:08 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:08.110 10:38:08 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:08.110 10:38:08 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:08.110 10:38:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:08.110 10:38:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:08.110 10:38:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:08.110 10:38:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:08.110 00:06:08.110 real 0m1.528s 00:06:08.110 user 0m1.489s 00:06:08.110 sys 0m0.533s 00:06:08.110 ************************************ 00:06:08.110 END TEST default_locks 00:06:08.110 ************************************ 00:06:08.110 10:38:08 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:08.110 10:38:08 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:08.371 10:38:08 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:08.371 10:38:08 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:08.371 10:38:08 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:08.371 10:38:08 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:08.371 ************************************ 00:06:08.371 START TEST default_locks_via_rpc 00:06:08.371 ************************************ 00:06:08.371 10:38:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:06:08.371 10:38:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=71107 00:06:08.371 10:38:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 71107 00:06:08.371 10:38:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 71107 ']' 00:06:08.371 10:38:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:08.371 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
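Each 'Waiting for process to start up and listen on UNIX domain socket ...' notice in this log is printed by waitforlisten before it polls; the poll loop itself runs under xtrace_disable, which is why it never appears in the trace. A plausible minimal version, assuming the readiness probe is "socket exists and answers an RPC" (the real helper's probe and backoff may differ; $rootdir stands for the SPDK checkout, /home/vagrant/spdk_repo/spdk in this run):

    waitforlisten_sketch() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
        [ -z "$pid" ] && return 1
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for (( i = 0; i < max_retries; i++ )); do
            kill -0 "$pid" || return 1      # target died before it started listening
            if [ -S "$rpc_addr" ] &&
                "$rootdir/scripts/rpc.py" -s "$rpc_addr" rpc_get_methods &> /dev/null; then
                return 0
            fi
            sleep 0.1
        done
        return 1    # gave up; callers treat this like the ERROR paths seen below
    }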
00:06:08.371 10:38:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:08.371 10:38:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:08.371 10:38:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:08.371 10:38:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:08.371 10:38:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:08.371 [2024-12-16 10:38:08.228674] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:06:08.371 [2024-12-16 10:38:08.228823] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71107 ] 00:06:08.632 [2024-12-16 10:38:08.359607] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.632 [2024-12-16 10:38:08.413333] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.203 10:38:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:09.203 10:38:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:09.203 10:38:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:09.203 10:38:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:09.203 10:38:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:09.203 10:38:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:09.203 10:38:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:09.203 10:38:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:09.203 10:38:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:09.203 10:38:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:09.203 10:38:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:09.203 10:38:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:09.203 10:38:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:09.203 10:38:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:09.203 10:38:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 71107 00:06:09.203 10:38:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 71107 00:06:09.203 10:38:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:09.466 10:38:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 71107 00:06:09.466 10:38:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 71107 ']' 00:06:09.466 10:38:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 71107 00:06:09.466 10:38:09 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:06:09.466 10:38:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:09.466 10:38:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71107 00:06:09.466 10:38:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:09.466 10:38:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:09.466 killing process with pid 71107 00:06:09.466 10:38:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71107' 00:06:09.466 10:38:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 71107 00:06:09.466 10:38:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 71107 00:06:09.727 00:06:09.727 real 0m1.486s 00:06:09.727 user 0m1.460s 00:06:09.727 sys 0m0.505s 00:06:09.727 10:38:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:09.727 ************************************ 00:06:09.727 END TEST default_locks_via_rpc 00:06:09.727 ************************************ 00:06:09.727 10:38:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:09.727 10:38:09 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:09.727 10:38:09 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:09.727 10:38:09 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:09.727 10:38:09 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:09.727 ************************************ 00:06:09.727 START TEST non_locking_app_on_locked_coremask 00:06:09.727 ************************************ 00:06:09.727 10:38:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:06:09.728 10:38:09 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=71159 00:06:09.728 10:38:09 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 71159 /var/tmp/spdk.sock 00:06:09.728 10:38:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 71159 ']' 00:06:09.728 10:38:09 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:09.728 10:38:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:09.728 10:38:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:09.728 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:09.728 10:38:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
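default_locks_via_rpc, which just finished above, differs from default_locks in one respect: instead of launching with locks disabled, it toggles them on a live target over the RPC socket and checks the lock files in between. The same two calls can be issued by hand, with the method names exactly as traced and the socket path used in this run:

    # drop the per-core spdk_cpu_lock files held by the running target
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock framework_disable_cpumask_locks

    # at this point 'lslocks -p <pid> | grep spdk_cpu_lock' finds nothing

    # re-claim the lock files for every core in the app's cpumask
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock framework_enable_cpumask_locks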
00:06:09.728 10:38:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:09.728 10:38:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:09.988 [2024-12-16 10:38:09.783082] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:06:09.988 [2024-12-16 10:38:09.783232] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71159 ] 00:06:09.988 [2024-12-16 10:38:09.920210] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.988 [2024-12-16 10:38:09.974293] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.933 10:38:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:10.933 10:38:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:10.933 10:38:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=71175 00:06:10.933 10:38:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 71175 /var/tmp/spdk2.sock 00:06:10.933 10:38:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 71175 ']' 00:06:10.933 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:10.933 10:38:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:10.933 10:38:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:10.933 10:38:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:10.933 10:38:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:10.933 10:38:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:10.933 10:38:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:10.933 [2024-12-16 10:38:10.708467] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:06:10.933 [2024-12-16 10:38:10.709051] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71175 ] 00:06:10.933 [2024-12-16 10:38:10.852536] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:10.933 [2024-12-16 10:38:10.852601] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.272 [2024-12-16 10:38:10.957746] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.883 10:38:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:11.883 10:38:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:11.883 10:38:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 71159 00:06:11.883 10:38:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 71159 00:06:11.883 10:38:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:11.883 10:38:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 71159 00:06:11.883 10:38:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 71159 ']' 00:06:11.883 10:38:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 71159 00:06:11.883 10:38:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:11.883 10:38:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:11.883 10:38:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71159 00:06:12.143 10:38:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:12.143 10:38:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:12.143 10:38:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71159' 00:06:12.143 killing process with pid 71159 00:06:12.143 10:38:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 71159 00:06:12.143 10:38:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 71159 00:06:12.401 10:38:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 71175 00:06:12.401 10:38:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 71175 ']' 00:06:12.401 10:38:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 71175 00:06:12.401 10:38:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:12.401 10:38:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:12.401 10:38:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71175 00:06:12.401 10:38:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:12.401 10:38:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:12.401 10:38:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71175' 00:06:12.401 killing process with pid 71175 00:06:12.401 10:38:12 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 71175 00:06:12.401 10:38:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 71175 00:06:12.660 00:06:12.660 real 0m2.908s 00:06:12.660 user 0m3.084s 00:06:12.660 sys 0m0.924s 00:06:12.660 10:38:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:12.661 ************************************ 00:06:12.661 END TEST non_locking_app_on_locked_coremask 00:06:12.661 ************************************ 00:06:12.661 10:38:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:12.919 10:38:12 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:12.919 10:38:12 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:12.919 10:38:12 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:12.919 10:38:12 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:12.919 ************************************ 00:06:12.919 START TEST locking_app_on_unlocked_coremask 00:06:12.919 ************************************ 00:06:12.919 10:38:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:06:12.919 10:38:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=71233 00:06:12.919 10:38:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 71233 /var/tmp/spdk.sock 00:06:12.919 10:38:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 71233 ']' 00:06:12.919 10:38:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:12.919 10:38:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:12.919 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:12.919 10:38:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:12.919 10:38:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:12.919 10:38:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:12.919 10:38:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:12.919 [2024-12-16 10:38:12.718283] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:06:12.919 [2024-12-16 10:38:12.718384] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71233 ] 00:06:12.919 [2024-12-16 10:38:12.848401] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
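Every START TEST / END TEST banner pair with a trailing real/user/sys summary in this log, including the non_locking_app_on_locked_coremask block that just closed, is emitted by the run_test wrapper: print a banner, time the named test function, print the closing banner. In outline (bookkeeping such as xtrace toggling is omitted, so this is not the exact shipped wrapper):

    run_test_sketch() {
        local test_name=$1; shift
        [ "$#" -le 0 ] && return 1   # the traced argument guard: '[' 2 -le 1 ']'
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        time "$@"                    # source of the real/user/sys lines above
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
    }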
00:06:12.919 [2024-12-16 10:38:12.848442] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.919 [2024-12-16 10:38:12.878386] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.852 10:38:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:13.852 10:38:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:13.852 10:38:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=71249 00:06:13.853 10:38:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:13.853 10:38:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 71249 /var/tmp/spdk2.sock 00:06:13.853 10:38:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 71249 ']' 00:06:13.853 10:38:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:13.853 10:38:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:13.853 10:38:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:13.853 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:13.853 10:38:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:13.853 10:38:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:13.853 [2024-12-16 10:38:13.625945] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:06:13.853 [2024-12-16 10:38:13.626397] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71249 ] 00:06:13.853 [2024-12-16 10:38:13.761833] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.853 [2024-12-16 10:38:13.818636] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.787 10:38:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:14.787 10:38:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:14.787 10:38:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 71249 00:06:14.787 10:38:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:14.787 10:38:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 71249 00:06:15.045 10:38:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 71233 00:06:15.045 10:38:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 71233 ']' 00:06:15.045 10:38:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 71233 00:06:15.045 10:38:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:15.045 10:38:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:15.045 10:38:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71233 00:06:15.045 10:38:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:15.045 killing process with pid 71233 00:06:15.045 10:38:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:15.045 10:38:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71233' 00:06:15.045 10:38:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 71233 00:06:15.045 10:38:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 71233 00:06:15.305 10:38:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 71249 00:06:15.305 10:38:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 71249 ']' 00:06:15.305 10:38:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 71249 00:06:15.305 10:38:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:15.305 10:38:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:15.305 10:38:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71249 00:06:15.565 10:38:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:15.565 killing process with pid 71249 00:06:15.565 10:38:15 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:15.565 10:38:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71249' 00:06:15.565 10:38:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 71249 00:06:15.565 10:38:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 71249 00:06:15.565 00:06:15.566 real 0m2.864s 00:06:15.566 user 0m3.212s 00:06:15.566 sys 0m0.725s 00:06:15.566 10:38:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:15.566 ************************************ 00:06:15.566 END TEST locking_app_on_unlocked_coremask 00:06:15.566 ************************************ 00:06:15.566 10:38:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:15.825 10:38:15 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:15.825 10:38:15 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:15.825 10:38:15 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:15.825 10:38:15 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:15.825 ************************************ 00:06:15.825 START TEST locking_app_on_locked_coremask 00:06:15.825 ************************************ 00:06:15.825 10:38:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:06:15.825 10:38:15 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=71306 00:06:15.825 10:38:15 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 71306 /var/tmp/spdk.sock 00:06:15.825 10:38:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 71306 ']' 00:06:15.825 10:38:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:15.825 10:38:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:15.825 10:38:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:15.825 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:15.825 10:38:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:15.825 10:38:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:15.825 10:38:15 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:15.825 [2024-12-16 10:38:15.650391] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:06:15.825 [2024-12-16 10:38:15.650514] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71306 ] 00:06:15.825 [2024-12-16 10:38:15.784097] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.083 [2024-12-16 10:38:15.817438] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.649 10:38:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:16.649 10:38:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:16.649 10:38:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=71312 00:06:16.649 10:38:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 71312 /var/tmp/spdk2.sock 00:06:16.649 10:38:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:06:16.649 10:38:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:16.649 10:38:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 71312 /var/tmp/spdk2.sock 00:06:16.649 10:38:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:16.649 10:38:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:16.649 10:38:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:16.649 10:38:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:16.649 10:38:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 71312 /var/tmp/spdk2.sock 00:06:16.649 10:38:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 71312 ']' 00:06:16.649 10:38:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:16.649 10:38:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:16.649 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:16.649 10:38:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:16.649 10:38:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:16.649 10:38:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:16.649 [2024-12-16 10:38:16.554778] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:06:16.649 [2024-12-16 10:38:16.554888] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71312 ] 00:06:16.907 [2024-12-16 10:38:16.688738] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 71306 has claimed it. 00:06:16.907 [2024-12-16 10:38:16.688789] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:17.473 ERROR: process (pid: 71312) is no longer running 00:06:17.473 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (71312) - No such process 00:06:17.473 10:38:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:17.473 10:38:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:06:17.473 10:38:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:17.473 10:38:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:17.473 10:38:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:17.473 10:38:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:17.473 10:38:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 71306 00:06:17.473 10:38:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 71306 00:06:17.473 10:38:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:17.473 10:38:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 71306 00:06:17.473 10:38:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 71306 ']' 00:06:17.473 10:38:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 71306 00:06:17.473 10:38:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:17.473 10:38:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:17.473 10:38:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71306 00:06:17.733 10:38:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:17.733 10:38:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:17.733 killing process with pid 71306 00:06:17.733 10:38:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71306' 00:06:17.733 10:38:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 71306 00:06:17.733 10:38:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 71306 00:06:17.733 00:06:17.733 real 0m2.117s 00:06:17.733 user 0m2.400s 00:06:17.733 sys 0m0.486s 00:06:17.733 10:38:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:17.733 ************************************ 00:06:17.733 END 
TEST locking_app_on_locked_coremask 00:06:17.733 ************************************ 00:06:17.733 10:38:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:17.993 10:38:17 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:17.993 10:38:17 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:17.993 10:38:17 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:17.993 10:38:17 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:17.993 ************************************ 00:06:17.993 START TEST locking_overlapped_coremask 00:06:17.993 ************************************ 00:06:17.993 10:38:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:06:17.993 10:38:17 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=71365 00:06:17.993 10:38:17 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 71365 /var/tmp/spdk.sock 00:06:17.993 10:38:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 71365 ']' 00:06:17.993 10:38:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:17.993 10:38:17 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:17.993 10:38:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:17.993 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:17.993 10:38:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:17.993 10:38:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:17.993 10:38:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:17.993 [2024-12-16 10:38:17.817280] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
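The locks_exist check in the teardown above works because SPDK core locks are ordinary file locks taken on /var/tmp/spdk_cpu_lock_NNN, one per claimed core, so they show up in lslocks keyed by pid. Spelled out with this run's pid:

    # what locks_exist 71306 does under the hood (cpu_locks.sh@22):
    lslocks -p 71306 | grep -q spdk_cpu_lock \
        && echo 'pid 71306 still holds its core lock'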
00:06:17.993 [2024-12-16 10:38:17.817428] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71365 ] 00:06:17.993 [2024-12-16 10:38:17.956993] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:18.252 [2024-12-16 10:38:17.990753] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:18.252 [2024-12-16 10:38:17.991056] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:18.252 [2024-12-16 10:38:17.991172] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.817 10:38:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:18.817 10:38:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:18.817 10:38:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=71383 00:06:18.817 10:38:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 71383 /var/tmp/spdk2.sock 00:06:18.817 10:38:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:06:18.817 10:38:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:18.817 10:38:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 71383 /var/tmp/spdk2.sock 00:06:18.817 10:38:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:18.817 10:38:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:18.817 10:38:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:18.817 10:38:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:18.817 10:38:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 71383 /var/tmp/spdk2.sock 00:06:18.817 10:38:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 71383 ']' 00:06:18.817 10:38:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:18.817 10:38:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:18.817 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:18.817 10:38:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:18.817 10:38:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:18.817 10:38:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:18.817 [2024-12-16 10:38:18.722722] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
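Here the masks merely overlap instead of matching: 0x7 pins the first target to cores 0-2, and the second target launched above uses 0x1c, cores 2-4, so the only contended core is their intersection. That is exactly the core named in the claim error just below:

    # mask arithmetic behind the test: the contended set is 0x7 & 0x1c
    printf 'contended mask: 0x%x\n' $(( 0x7 & 0x1c ))   # -> 0x4, i.e. core 2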
00:06:18.817 [2024-12-16 10:38:18.722838] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71383 ] 00:06:19.075 [2024-12-16 10:38:18.863576] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 71365 has claimed it. 00:06:19.075 [2024-12-16 10:38:18.863633] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:19.643 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (71383) - No such process 00:06:19.643 ERROR: process (pid: 71383) is no longer running 00:06:19.643 10:38:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:19.643 10:38:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:06:19.643 10:38:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:19.643 10:38:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:19.643 10:38:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:19.643 10:38:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:19.643 10:38:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:19.643 10:38:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:19.643 10:38:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:19.644 10:38:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:19.644 10:38:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 71365 00:06:19.644 10:38:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 71365 ']' 00:06:19.644 10:38:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 71365 00:06:19.644 10:38:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:06:19.644 10:38:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:19.644 10:38:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71365 00:06:19.644 killing process with pid 71365 00:06:19.644 10:38:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:19.644 10:38:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:19.644 10:38:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71365' 00:06:19.644 10:38:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 71365 00:06:19.644 10:38:19 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 71365 00:06:19.905 00:06:19.905 real 0m1.899s 00:06:19.905 user 0m5.240s 00:06:19.905 sys 0m0.377s 00:06:19.905 10:38:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:19.905 ************************************ 00:06:19.905 END TEST locking_overlapped_coremask 00:06:19.905 ************************************ 00:06:19.905 10:38:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:19.905 10:38:19 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:19.905 10:38:19 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:19.905 10:38:19 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:19.905 10:38:19 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:19.905 ************************************ 00:06:19.905 START TEST locking_overlapped_coremask_via_rpc 00:06:19.905 ************************************ 00:06:19.905 10:38:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:06:19.905 10:38:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=71425 00:06:19.905 10:38:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 71425 /var/tmp/spdk.sock 00:06:19.905 10:38:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 71425 ']' 00:06:19.905 10:38:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:19.905 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:19.905 10:38:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:19.905 10:38:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:19.905 10:38:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:19.905 10:38:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:19.905 10:38:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:19.905 [2024-12-16 10:38:19.770116] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:06:19.905 [2024-12-16 10:38:19.770232] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71425 ] 00:06:20.166 [2024-12-16 10:38:19.906128] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
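Unlike the two tests before it, the via_rpc variant defers locking: the target starts with --disable-cpumask-locks (hence the "CPU core locks deactivated" notice above) and only claims its locks later over JSON-RPC, after which the test verifies one lock file per core in the mask. A sketch mirroring the check_remaining_locks comparison traced further down:

    rpc.py framework_enable_cpumask_locks          # claim locks after startup
    locks=(/var/tmp/spdk_cpu_lock_*)
    expected=(/var/tmp/spdk_cpu_lock_{000..002})   # one file per core in 0x7
    [[ "${locks[*]}" == "${expected[*]}" ]] && echo 'locks 000-002 present'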
00:06:20.166 [2024-12-16 10:38:19.906171] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:20.166 [2024-12-16 10:38:19.940493] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:20.166 [2024-12-16 10:38:19.940787] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:20.166 [2024-12-16 10:38:19.940794] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.738 10:38:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:20.738 10:38:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:20.738 10:38:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=71432 00:06:20.738 10:38:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 71432 /var/tmp/spdk2.sock 00:06:20.738 10:38:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 71432 ']' 00:06:20.738 10:38:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:20.738 10:38:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:20.738 10:38:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:20.738 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:20.738 10:38:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:20.738 10:38:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:20.738 10:38:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:20.738 [2024-12-16 10:38:20.672360] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:06:20.738 [2024-12-16 10:38:20.672587] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71432 ] 00:06:20.997 [2024-12-16 10:38:20.813571] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
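With locking disabled on both sides, the two targets come up on overlapping masks without conflict; the second one (0x1c, cores 2-4) just printed the same "locks deactivated" notice, which is the precondition for the RPC race that follows. At this point, assuming a clean /var/tmp, no lock files exist yet:

    # lock files only appear once framework_enable_cpumask_locks is called
    ls /var/tmp/spdk_cpu_lock_* 2>/dev/null || echo 'no core locks yet'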
00:06:20.997 [2024-12-16 10:38:20.813620] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:20.997 [2024-12-16 10:38:20.879629] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:06:20.997 [2024-12-16 10:38:20.883130] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:20.997 [2024-12-16 10:38:20.883193] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:06:21.563 10:38:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:21.563 10:38:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:21.564 10:38:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:21.564 10:38:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:21.564 10:38:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:21.564 10:38:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:21.564 10:38:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:21.564 10:38:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:06:21.564 10:38:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:21.564 10:38:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:21.564 10:38:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:21.564 10:38:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:21.564 10:38:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:21.564 10:38:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:21.564 10:38:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:21.564 10:38:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:21.564 [2024-12-16 10:38:21.533040] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 71425 has claimed it. 00:06:21.564 request: 00:06:21.564 { 00:06:21.564 "method": "framework_enable_cpumask_locks", 00:06:21.564 "req_id": 1 00:06:21.564 } 00:06:21.564 Got JSON-RPC error response 00:06:21.564 response: 00:06:21.564 { 00:06:21.564 "code": -32603, 00:06:21.564 "message": "Failed to claim CPU core: 2" 00:06:21.564 } 00:06:21.564 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
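The exchange above is the heart of the via_rpc variant: the first target takes all three locks with framework_enable_cpumask_locks, and the same call on the second target's socket comes back -32603 because core 2 is already claimed. The equivalent direct calls with scripts/rpc.py, which the rpc_cmd helper drives:

    rpc.py framework_enable_cpumask_locks                          # first target: ok
    rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks   # -32603, core 2 taken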
00:06:21.564 10:38:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:21.564 10:38:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:06:21.564 10:38:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:21.564 10:38:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:21.564 10:38:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:21.564 10:38:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 71425 /var/tmp/spdk.sock 00:06:21.564 10:38:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 71425 ']' 00:06:21.564 10:38:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:21.564 10:38:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:21.564 10:38:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:21.564 10:38:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:21.564 10:38:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:21.867 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:21.867 10:38:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:21.867 10:38:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:21.867 10:38:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 71432 /var/tmp/spdk2.sock 00:06:21.867 10:38:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 71432 ']' 00:06:21.867 10:38:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:21.867 10:38:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:21.867 10:38:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
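Note the contrast with the startup-mask tests, where the losing process died outright ("No such process" earlier): a rejected framework_enable_cpumask_locks leaves the target running. The test therefore re-waits on both sockets, exactly as traced above, and the cleanup further down must kill both pids explicitly:

    waitforlisten 71425 /var/tmp/spdk.sock    # first target still serving RPC
    waitforlisten 71432 /var/tmp/spdk2.sock   # so is the one whose claim failed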
00:06:21.867 10:38:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:21.867 10:38:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:22.126 ************************************ 00:06:22.126 END TEST locking_overlapped_coremask_via_rpc 00:06:22.126 ************************************ 00:06:22.126 10:38:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:22.126 10:38:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:22.126 10:38:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:22.126 10:38:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:22.126 10:38:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:22.126 10:38:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:22.126 00:06:22.126 real 0m2.248s 00:06:22.126 user 0m1.074s 00:06:22.126 sys 0m0.108s 00:06:22.126 10:38:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:22.126 10:38:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:22.126 10:38:21 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:22.126 10:38:21 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 71425 ]] 00:06:22.126 10:38:21 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 71425 00:06:22.126 10:38:21 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 71425 ']' 00:06:22.126 10:38:21 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 71425 00:06:22.126 10:38:21 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:06:22.126 10:38:21 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:22.126 10:38:21 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71425 00:06:22.126 killing process with pid 71425 00:06:22.126 10:38:21 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:22.126 10:38:21 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:22.126 10:38:21 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71425' 00:06:22.126 10:38:21 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 71425 00:06:22.126 10:38:21 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 71425 00:06:22.385 10:38:22 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 71432 ]] 00:06:22.385 10:38:22 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 71432 00:06:22.385 10:38:22 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 71432 ']' 00:06:22.385 10:38:22 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 71432 00:06:22.385 10:38:22 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:06:22.385 10:38:22 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:22.385 
10:38:22 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71432 00:06:22.385 killing process with pid 71432 00:06:22.385 10:38:22 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:06:22.385 10:38:22 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:06:22.385 10:38:22 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71432' 00:06:22.385 10:38:22 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 71432 00:06:22.385 10:38:22 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 71432 00:06:22.644 10:38:22 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:22.644 10:38:22 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:22.644 10:38:22 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 71425 ]] 00:06:22.644 10:38:22 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 71425 00:06:22.644 Process with pid 71425 is not found 00:06:22.644 Process with pid 71432 is not found 00:06:22.644 10:38:22 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 71425 ']' 00:06:22.644 10:38:22 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 71425 00:06:22.644 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (71425) - No such process 00:06:22.644 10:38:22 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 71425 is not found' 00:06:22.644 10:38:22 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 71432 ]] 00:06:22.644 10:38:22 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 71432 00:06:22.644 10:38:22 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 71432 ']' 00:06:22.644 10:38:22 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 71432 00:06:22.644 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (71432) - No such process 00:06:22.644 10:38:22 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 71432 is not found' 00:06:22.644 10:38:22 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:22.644 ************************************ 00:06:22.644 END TEST cpu_locks 00:06:22.644 ************************************ 00:06:22.644 00:06:22.644 real 0m16.133s 00:06:22.644 user 0m28.148s 00:06:22.644 sys 0m4.394s 00:06:22.644 10:38:22 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:22.644 10:38:22 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:22.644 00:06:22.644 real 0m41.834s 00:06:22.644 user 1m20.845s 00:06:22.644 sys 0m7.079s 00:06:22.644 ************************************ 00:06:22.644 END TEST event 00:06:22.644 ************************************ 00:06:22.644 10:38:22 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:22.644 10:38:22 event -- common/autotest_common.sh@10 -- # set +x 00:06:22.644 10:38:22 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:22.644 10:38:22 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:22.644 10:38:22 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:22.644 10:38:22 -- common/autotest_common.sh@10 -- # set +x 00:06:22.644 ************************************ 00:06:22.644 START TEST thread 00:06:22.644 ************************************ 00:06:22.644 10:38:22 thread -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:22.905 * Looking for test storage... 
00:06:22.905 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:22.905 10:38:22 thread -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:22.905 10:38:22 thread -- common/autotest_common.sh@1681 -- # lcov --version 00:06:22.905 10:38:22 thread -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:22.905 10:38:22 thread -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:22.905 10:38:22 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:22.905 10:38:22 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:22.905 10:38:22 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:22.905 10:38:22 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:22.905 10:38:22 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:22.905 10:38:22 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:22.905 10:38:22 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:22.905 10:38:22 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:22.905 10:38:22 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:22.905 10:38:22 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:22.905 10:38:22 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:22.905 10:38:22 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:22.905 10:38:22 thread -- scripts/common.sh@345 -- # : 1 00:06:22.905 10:38:22 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:22.905 10:38:22 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:22.905 10:38:22 thread -- scripts/common.sh@365 -- # decimal 1 00:06:22.905 10:38:22 thread -- scripts/common.sh@353 -- # local d=1 00:06:22.905 10:38:22 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:22.905 10:38:22 thread -- scripts/common.sh@355 -- # echo 1 00:06:22.905 10:38:22 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:22.905 10:38:22 thread -- scripts/common.sh@366 -- # decimal 2 00:06:22.905 10:38:22 thread -- scripts/common.sh@353 -- # local d=2 00:06:22.905 10:38:22 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:22.905 10:38:22 thread -- scripts/common.sh@355 -- # echo 2 00:06:22.905 10:38:22 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:22.905 10:38:22 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:22.905 10:38:22 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:22.905 10:38:22 thread -- scripts/common.sh@368 -- # return 0 00:06:22.905 10:38:22 thread -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:22.905 10:38:22 thread -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:22.905 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.905 --rc genhtml_branch_coverage=1 00:06:22.905 --rc genhtml_function_coverage=1 00:06:22.905 --rc genhtml_legend=1 00:06:22.905 --rc geninfo_all_blocks=1 00:06:22.905 --rc geninfo_unexecuted_blocks=1 00:06:22.905 00:06:22.905 ' 00:06:22.905 10:38:22 thread -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:22.905 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.905 --rc genhtml_branch_coverage=1 00:06:22.905 --rc genhtml_function_coverage=1 00:06:22.905 --rc genhtml_legend=1 00:06:22.905 --rc geninfo_all_blocks=1 00:06:22.905 --rc geninfo_unexecuted_blocks=1 00:06:22.905 00:06:22.905 ' 00:06:22.905 10:38:22 thread -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:22.905 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:06:22.905 --rc genhtml_branch_coverage=1 00:06:22.905 --rc genhtml_function_coverage=1 00:06:22.905 --rc genhtml_legend=1 00:06:22.905 --rc geninfo_all_blocks=1 00:06:22.905 --rc geninfo_unexecuted_blocks=1 00:06:22.905 00:06:22.905 ' 00:06:22.905 10:38:22 thread -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:22.905 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.905 --rc genhtml_branch_coverage=1 00:06:22.905 --rc genhtml_function_coverage=1 00:06:22.905 --rc genhtml_legend=1 00:06:22.905 --rc geninfo_all_blocks=1 00:06:22.905 --rc geninfo_unexecuted_blocks=1 00:06:22.905 00:06:22.905 ' 00:06:22.905 10:38:22 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:22.905 10:38:22 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:06:22.905 10:38:22 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:22.905 10:38:22 thread -- common/autotest_common.sh@10 -- # set +x 00:06:22.905 ************************************ 00:06:22.905 START TEST thread_poller_perf 00:06:22.905 ************************************ 00:06:22.905 10:38:22 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:22.905 [2024-12-16 10:38:22.804082] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:06:22.905 [2024-12-16 10:38:22.804190] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71570 ] 00:06:23.165 [2024-12-16 10:38:22.938112] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.165 [2024-12-16 10:38:22.971076] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.165 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:06:24.108 [2024-12-16T10:38:24.097Z] ====================================== 00:06:24.108 [2024-12-16T10:38:24.097Z] busy:2616440482 (cyc) 00:06:24.108 [2024-12-16T10:38:24.097Z] total_run_count: 305000 00:06:24.108 [2024-12-16T10:38:24.097Z] tsc_hz: 2600000000 (cyc) 00:06:24.108 [2024-12-16T10:38:24.097Z] ====================================== 00:06:24.108 [2024-12-16T10:38:24.097Z] poller_cost: 8578 (cyc), 3299 (nsec) 00:06:24.108 00:06:24.108 real 0m1.289s 00:06:24.108 user 0m1.121s 00:06:24.108 sys 0m0.060s 00:06:24.108 10:38:24 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:24.108 ************************************ 00:06:24.108 END TEST thread_poller_perf 00:06:24.108 ************************************ 00:06:24.108 10:38:24 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:24.369 10:38:24 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:24.369 10:38:24 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:06:24.369 10:38:24 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:24.369 10:38:24 thread -- common/autotest_common.sh@10 -- # set +x 00:06:24.369 ************************************ 00:06:24.369 START TEST thread_poller_perf 00:06:24.369 ************************************ 00:06:24.369 10:38:24 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:24.369 [2024-12-16 10:38:24.164102] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:06:24.369 [2024-12-16 10:38:24.164403] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71601 ] 00:06:24.369 [2024-12-16 10:38:24.297924] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.369 [2024-12-16 10:38:24.349005] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.369 Running 1000 pollers for 1 seconds with 0 microseconds period. 
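The summary block above reduces to two divisions: poller_cost in cycles is busy cycles over total_run_count, and the nanosecond figure rescales that by tsc_hz. With the 1-microsecond-period run's numbers (the 0-period run that follows works out to 657 cyc / 252 nsec the same way):

    echo $(( 2616440482 / 305000 ))              # 8578 cycles per poller call
    echo $(( 8578 * 1000000000 / 2600000000 ))   # 3299 nsec at a 2.6 GHz TSC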
00:06:25.763 [2024-12-16T10:38:25.752Z] ====================================== 00:06:25.763 [2024-12-16T10:38:25.752Z] busy:2603724368 (cyc) 00:06:25.763 [2024-12-16T10:38:25.752Z] total_run_count: 3958000 00:06:25.763 [2024-12-16T10:38:25.752Z] tsc_hz: 2600000000 (cyc) 00:06:25.763 [2024-12-16T10:38:25.752Z] ====================================== 00:06:25.763 [2024-12-16T10:38:25.752Z] poller_cost: 657 (cyc), 252 (nsec) 00:06:25.763 00:06:25.763 real 0m1.300s 00:06:25.763 user 0m1.122s 00:06:25.763 sys 0m0.069s 00:06:25.763 10:38:25 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:25.763 10:38:25 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:25.763 ************************************ 00:06:25.763 END TEST thread_poller_perf 00:06:25.763 ************************************ 00:06:25.763 10:38:25 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:25.763 00:06:25.763 real 0m2.864s 00:06:25.763 user 0m2.355s 00:06:25.763 sys 0m0.258s 00:06:25.763 10:38:25 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:25.763 ************************************ 00:06:25.763 END TEST thread 00:06:25.763 ************************************ 00:06:25.763 10:38:25 thread -- common/autotest_common.sh@10 -- # set +x 00:06:25.763 10:38:25 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:25.763 10:38:25 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:25.763 10:38:25 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:25.763 10:38:25 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:25.763 10:38:25 -- common/autotest_common.sh@10 -- # set +x 00:06:25.763 ************************************ 00:06:25.763 START TEST app_cmdline 00:06:25.763 ************************************ 00:06:25.763 10:38:25 app_cmdline -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:25.763 * Looking for test storage... 
00:06:25.763 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:25.763 10:38:25 app_cmdline -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:25.763 10:38:25 app_cmdline -- common/autotest_common.sh@1681 -- # lcov --version 00:06:25.763 10:38:25 app_cmdline -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:25.763 10:38:25 app_cmdline -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:25.763 10:38:25 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:25.763 10:38:25 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:25.763 10:38:25 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:25.763 10:38:25 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:25.763 10:38:25 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:25.763 10:38:25 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:25.763 10:38:25 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:25.763 10:38:25 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:06:25.763 10:38:25 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:25.763 10:38:25 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:25.763 10:38:25 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:25.763 10:38:25 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:25.763 10:38:25 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:25.763 10:38:25 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:25.763 10:38:25 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:25.763 10:38:25 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:25.763 10:38:25 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:25.763 10:38:25 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:25.763 10:38:25 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:25.763 10:38:25 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:25.763 10:38:25 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:25.763 10:38:25 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:25.763 10:38:25 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:25.763 10:38:25 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:25.763 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:25.763 10:38:25 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:25.763 10:38:25 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:25.763 10:38:25 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:25.763 10:38:25 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:25.763 10:38:25 app_cmdline -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:25.763 10:38:25 app_cmdline -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:25.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:25.763 --rc genhtml_branch_coverage=1 00:06:25.763 --rc genhtml_function_coverage=1 00:06:25.763 --rc genhtml_legend=1 00:06:25.763 --rc geninfo_all_blocks=1 00:06:25.763 --rc geninfo_unexecuted_blocks=1 00:06:25.763 00:06:25.763 ' 00:06:25.763 10:38:25 app_cmdline -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:25.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:25.763 --rc genhtml_branch_coverage=1 00:06:25.763 --rc genhtml_function_coverage=1 00:06:25.763 --rc genhtml_legend=1 00:06:25.763 --rc geninfo_all_blocks=1 00:06:25.763 --rc geninfo_unexecuted_blocks=1 00:06:25.763 00:06:25.763 ' 00:06:25.763 10:38:25 app_cmdline -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:25.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:25.763 --rc genhtml_branch_coverage=1 00:06:25.763 --rc genhtml_function_coverage=1 00:06:25.763 --rc genhtml_legend=1 00:06:25.763 --rc geninfo_all_blocks=1 00:06:25.763 --rc geninfo_unexecuted_blocks=1 00:06:25.763 00:06:25.763 ' 00:06:25.763 10:38:25 app_cmdline -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:25.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:25.763 --rc genhtml_branch_coverage=1 00:06:25.764 --rc genhtml_function_coverage=1 00:06:25.764 --rc genhtml_legend=1 00:06:25.764 --rc geninfo_all_blocks=1 00:06:25.764 --rc geninfo_unexecuted_blocks=1 00:06:25.764 00:06:25.764 ' 00:06:25.764 10:38:25 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:25.764 10:38:25 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=71679 00:06:25.764 10:38:25 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 71679 00:06:25.764 10:38:25 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 71679 ']' 00:06:25.764 10:38:25 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:25.764 10:38:25 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:25.764 10:38:25 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:25.764 10:38:25 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:25.764 10:38:25 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:25.764 10:38:25 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:26.024 [2024-12-16 10:38:25.776734] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
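cmdline.sh launches the target above with an allow-list of exactly two RPCs, then checks that rpc_get_methods reports just those two and that any other method name is rejected with -32601. In essence:

    spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
    waitforlisten $!
    rpc.py spdk_get_version            # allowed
    rpc.py env_dpdk_get_mem_stats      # rejected: Method not found (below)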
00:06:26.024 [2024-12-16 10:38:25.776898] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71679 ] 00:06:26.024 [2024-12-16 10:38:25.911733] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.024 [2024-12-16 10:38:25.962198] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.966 10:38:26 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:26.966 10:38:26 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:06:26.966 10:38:26 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:26.966 { 00:06:26.966 "version": "SPDK v24.09.1-pre git sha1 b18e1bd62", 00:06:26.966 "fields": { 00:06:26.966 "major": 24, 00:06:26.966 "minor": 9, 00:06:26.966 "patch": 1, 00:06:26.966 "suffix": "-pre", 00:06:26.966 "commit": "b18e1bd62" 00:06:26.966 } 00:06:26.966 } 00:06:26.966 10:38:26 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:26.966 10:38:26 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:26.966 10:38:26 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:26.966 10:38:26 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:26.966 10:38:26 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:26.966 10:38:26 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:26.966 10:38:26 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:26.966 10:38:26 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:26.966 10:38:26 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:26.966 10:38:26 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:26.966 10:38:26 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:26.966 10:38:26 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:26.966 10:38:26 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:26.966 10:38:26 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:06:26.966 10:38:26 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:26.966 10:38:26 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:26.966 10:38:26 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:26.966 10:38:26 app_cmdline -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:26.966 10:38:26 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:26.966 10:38:26 app_cmdline -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:26.966 10:38:26 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:26.966 10:38:26 app_cmdline -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:26.966 10:38:26 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:26.966 10:38:26 app_cmdline -- common/autotest_common.sh@653 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:27.226 request: 00:06:27.226 { 00:06:27.226 "method": "env_dpdk_get_mem_stats", 00:06:27.226 "req_id": 1 00:06:27.226 } 00:06:27.226 Got JSON-RPC error response 00:06:27.226 response: 00:06:27.226 { 00:06:27.226 "code": -32601, 00:06:27.226 "message": "Method not found" 00:06:27.226 } 00:06:27.226 10:38:27 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:06:27.226 10:38:27 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:27.226 10:38:27 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:27.226 10:38:27 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:27.226 10:38:27 app_cmdline -- app/cmdline.sh@1 -- # killprocess 71679 00:06:27.226 10:38:27 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 71679 ']' 00:06:27.226 10:38:27 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 71679 00:06:27.226 10:38:27 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:06:27.226 10:38:27 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:27.226 10:38:27 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71679 00:06:27.226 killing process with pid 71679 00:06:27.226 10:38:27 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:27.226 10:38:27 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:27.226 10:38:27 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71679' 00:06:27.226 10:38:27 app_cmdline -- common/autotest_common.sh@969 -- # kill 71679 00:06:27.226 10:38:27 app_cmdline -- common/autotest_common.sh@974 -- # wait 71679 00:06:27.487 00:06:27.487 real 0m1.833s 00:06:27.487 user 0m2.162s 00:06:27.487 sys 0m0.446s 00:06:27.487 ************************************ 00:06:27.487 END TEST app_cmdline 00:06:27.487 ************************************ 00:06:27.487 10:38:27 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:27.487 10:38:27 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:27.487 10:38:27 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:27.487 10:38:27 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:27.487 10:38:27 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:27.487 10:38:27 -- common/autotest_common.sh@10 -- # set +x 00:06:27.487 ************************************ 00:06:27.487 START TEST version 00:06:27.487 ************************************ 00:06:27.487 10:38:27 version -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:27.747 * Looking for test storage... 
00:06:27.747 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:27.747 10:38:27 version -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:27.747 10:38:27 version -- common/autotest_common.sh@1681 -- # lcov --version 00:06:27.747 10:38:27 version -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:27.747 10:38:27 version -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:27.747 10:38:27 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:27.747 10:38:27 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:27.747 10:38:27 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:27.747 10:38:27 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:27.747 10:38:27 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:27.747 10:38:27 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:27.747 10:38:27 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:27.747 10:38:27 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:27.747 10:38:27 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:27.747 10:38:27 version -- scripts/common.sh@341 -- # ver2_l=1 00:06:27.747 10:38:27 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:27.747 10:38:27 version -- scripts/common.sh@344 -- # case "$op" in 00:06:27.747 10:38:27 version -- scripts/common.sh@345 -- # : 1 00:06:27.747 10:38:27 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:27.747 10:38:27 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:27.747 10:38:27 version -- scripts/common.sh@365 -- # decimal 1 00:06:27.747 10:38:27 version -- scripts/common.sh@353 -- # local d=1 00:06:27.747 10:38:27 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:27.747 10:38:27 version -- scripts/common.sh@355 -- # echo 1 00:06:27.747 10:38:27 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:27.747 10:38:27 version -- scripts/common.sh@366 -- # decimal 2 00:06:27.747 10:38:27 version -- scripts/common.sh@353 -- # local d=2 00:06:27.747 10:38:27 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:27.747 10:38:27 version -- scripts/common.sh@355 -- # echo 2 00:06:27.747 10:38:27 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:27.747 10:38:27 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:27.747 10:38:27 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:27.747 10:38:27 version -- scripts/common.sh@368 -- # return 0 00:06:27.747 10:38:27 version -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:27.747 10:38:27 version -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:27.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.747 --rc genhtml_branch_coverage=1 00:06:27.747 --rc genhtml_function_coverage=1 00:06:27.747 --rc genhtml_legend=1 00:06:27.747 --rc geninfo_all_blocks=1 00:06:27.747 --rc geninfo_unexecuted_blocks=1 00:06:27.747 00:06:27.747 ' 00:06:27.747 10:38:27 version -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:27.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.747 --rc genhtml_branch_coverage=1 00:06:27.747 --rc genhtml_function_coverage=1 00:06:27.747 --rc genhtml_legend=1 00:06:27.747 --rc geninfo_all_blocks=1 00:06:27.747 --rc geninfo_unexecuted_blocks=1 00:06:27.747 00:06:27.747 ' 00:06:27.747 10:38:27 version -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:27.747 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:06:27.747 --rc genhtml_branch_coverage=1 00:06:27.747 --rc genhtml_function_coverage=1 00:06:27.747 --rc genhtml_legend=1 00:06:27.747 --rc geninfo_all_blocks=1 00:06:27.747 --rc geninfo_unexecuted_blocks=1 00:06:27.747 00:06:27.747 ' 00:06:27.747 10:38:27 version -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:27.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.747 --rc genhtml_branch_coverage=1 00:06:27.747 --rc genhtml_function_coverage=1 00:06:27.747 --rc genhtml_legend=1 00:06:27.747 --rc geninfo_all_blocks=1 00:06:27.747 --rc geninfo_unexecuted_blocks=1 00:06:27.747 00:06:27.747 ' 00:06:27.747 10:38:27 version -- app/version.sh@17 -- # get_header_version major 00:06:27.747 10:38:27 version -- app/version.sh@14 -- # tr -d '"' 00:06:27.747 10:38:27 version -- app/version.sh@14 -- # cut -f2 00:06:27.747 10:38:27 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:27.747 10:38:27 version -- app/version.sh@17 -- # major=24 00:06:27.747 10:38:27 version -- app/version.sh@18 -- # get_header_version minor 00:06:27.747 10:38:27 version -- app/version.sh@14 -- # cut -f2 00:06:27.747 10:38:27 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:27.747 10:38:27 version -- app/version.sh@14 -- # tr -d '"' 00:06:27.747 10:38:27 version -- app/version.sh@18 -- # minor=9 00:06:27.747 10:38:27 version -- app/version.sh@19 -- # get_header_version patch 00:06:27.747 10:38:27 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:27.747 10:38:27 version -- app/version.sh@14 -- # tr -d '"' 00:06:27.747 10:38:27 version -- app/version.sh@14 -- # cut -f2 00:06:27.747 10:38:27 version -- app/version.sh@19 -- # patch=1 00:06:27.747 10:38:27 version -- app/version.sh@20 -- # get_header_version suffix 00:06:27.747 10:38:27 version -- app/version.sh@14 -- # tr -d '"' 00:06:27.747 10:38:27 version -- app/version.sh@14 -- # cut -f2 00:06:27.747 10:38:27 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:27.747 10:38:27 version -- app/version.sh@20 -- # suffix=-pre 00:06:27.747 10:38:27 version -- app/version.sh@22 -- # version=24.9 00:06:27.747 10:38:27 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:27.747 10:38:27 version -- app/version.sh@25 -- # version=24.9.1 00:06:27.747 10:38:27 version -- app/version.sh@28 -- # version=24.9.1rc0 00:06:27.747 10:38:27 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:27.747 10:38:27 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:27.747 10:38:27 version -- app/version.sh@30 -- # py_version=24.9.1rc0 00:06:27.747 10:38:27 version -- app/version.sh@31 -- # [[ 24.9.1rc0 == \2\4\.\9\.\1\r\c\0 ]] 00:06:27.747 ************************************ 00:06:27.747 END TEST version 00:06:27.747 ************************************ 00:06:27.747 00:06:27.747 real 0m0.194s 00:06:27.747 user 0m0.120s 00:06:27.747 sys 0m0.091s 00:06:27.747 10:38:27 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:27.747 10:38:27 
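version.sh, traced above, derives the version twice and requires agreement: once by scraping the SPDK_VERSION_* defines out of include/spdk/version.h with the grep/cut/tr pipeline (the bare cut -f2 implies the header's fields are tab-separated), with the -pre suffix mapped to rc0 to match the Python package's PEP 440 form, and once from python3 -c 'import spdk; print(spdk.__version__)'. One field of the header-side extraction, assuming a define line like '#define SPDK_VERSION_MAJOR<TAB>24':

    grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' include/spdk/version.h \
        | cut -f2 | tr -d '"'     # -> 24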
version -- common/autotest_common.sh@10 -- # set +x 00:06:27.747 10:38:27 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:27.747 10:38:27 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:06:27.747 10:38:27 -- spdk/autotest.sh@194 -- # uname -s 00:06:27.747 10:38:27 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:06:27.747 10:38:27 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:27.747 10:38:27 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:27.747 10:38:27 -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']' 00:06:27.747 10:38:27 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:06:27.747 10:38:27 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:27.747 10:38:27 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:27.747 10:38:27 -- common/autotest_common.sh@10 -- # set +x 00:06:27.747 ************************************ 00:06:27.747 START TEST blockdev_nvme 00:06:27.747 ************************************ 00:06:27.747 10:38:27 blockdev_nvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:06:28.009 * Looking for test storage... 00:06:28.009 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:06:28.009 10:38:27 blockdev_nvme -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:28.009 10:38:27 blockdev_nvme -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:28.009 10:38:27 blockdev_nvme -- common/autotest_common.sh@1681 -- # lcov --version 00:06:28.009 10:38:27 blockdev_nvme -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:28.009 10:38:27 blockdev_nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:28.009 10:38:27 blockdev_nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:28.009 10:38:27 blockdev_nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:28.009 10:38:27 blockdev_nvme -- scripts/common.sh@336 -- # IFS=.-: 00:06:28.009 10:38:27 blockdev_nvme -- scripts/common.sh@336 -- # read -ra ver1 00:06:28.009 10:38:27 blockdev_nvme -- scripts/common.sh@337 -- # IFS=.-: 00:06:28.009 10:38:27 blockdev_nvme -- scripts/common.sh@337 -- # read -ra ver2 00:06:28.009 10:38:27 blockdev_nvme -- scripts/common.sh@338 -- # local 'op=<' 00:06:28.009 10:38:27 blockdev_nvme -- scripts/common.sh@340 -- # ver1_l=2 00:06:28.009 10:38:27 blockdev_nvme -- scripts/common.sh@341 -- # ver2_l=1 00:06:28.009 10:38:27 blockdev_nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:28.009 10:38:27 blockdev_nvme -- scripts/common.sh@344 -- # case "$op" in 00:06:28.009 10:38:27 blockdev_nvme -- scripts/common.sh@345 -- # : 1 00:06:28.009 10:38:27 blockdev_nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:28.009 10:38:27 blockdev_nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:28.009 10:38:27 blockdev_nvme -- scripts/common.sh@365 -- # decimal 1 00:06:28.009 10:38:27 blockdev_nvme -- scripts/common.sh@353 -- # local d=1 00:06:28.009 10:38:27 blockdev_nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:28.009 10:38:27 blockdev_nvme -- scripts/common.sh@355 -- # echo 1 00:06:28.009 10:38:27 blockdev_nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:06:28.009 10:38:27 blockdev_nvme -- scripts/common.sh@366 -- # decimal 2 00:06:28.009 10:38:27 blockdev_nvme -- scripts/common.sh@353 -- # local d=2 00:06:28.009 10:38:27 blockdev_nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:28.009 10:38:27 blockdev_nvme -- scripts/common.sh@355 -- # echo 2 00:06:28.009 10:38:27 blockdev_nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:06:28.009 10:38:27 blockdev_nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:28.009 10:38:27 blockdev_nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:28.009 10:38:27 blockdev_nvme -- scripts/common.sh@368 -- # return 0 00:06:28.009 10:38:27 blockdev_nvme -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:28.009 10:38:27 blockdev_nvme -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:28.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.009 --rc genhtml_branch_coverage=1 00:06:28.009 --rc genhtml_function_coverage=1 00:06:28.009 --rc genhtml_legend=1 00:06:28.009 --rc geninfo_all_blocks=1 00:06:28.009 --rc geninfo_unexecuted_blocks=1 00:06:28.009 00:06:28.009 ' 00:06:28.009 10:38:27 blockdev_nvme -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:28.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.009 --rc genhtml_branch_coverage=1 00:06:28.009 --rc genhtml_function_coverage=1 00:06:28.009 --rc genhtml_legend=1 00:06:28.009 --rc geninfo_all_blocks=1 00:06:28.009 --rc geninfo_unexecuted_blocks=1 00:06:28.009 00:06:28.009 ' 00:06:28.009 10:38:27 blockdev_nvme -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:28.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.009 --rc genhtml_branch_coverage=1 00:06:28.009 --rc genhtml_function_coverage=1 00:06:28.009 --rc genhtml_legend=1 00:06:28.009 --rc geninfo_all_blocks=1 00:06:28.009 --rc geninfo_unexecuted_blocks=1 00:06:28.009 00:06:28.009 ' 00:06:28.009 10:38:27 blockdev_nvme -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:28.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.009 --rc genhtml_branch_coverage=1 00:06:28.009 --rc genhtml_function_coverage=1 00:06:28.009 --rc genhtml_legend=1 00:06:28.009 --rc geninfo_all_blocks=1 00:06:28.009 --rc geninfo_unexecuted_blocks=1 00:06:28.009 00:06:28.009 ' 00:06:28.009 10:38:27 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:28.009 10:38:27 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:06:28.009 10:38:27 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:06:28.009 10:38:27 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:28.009 10:38:27 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:06:28.009 10:38:27 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:06:28.009 10:38:27 blockdev_nvme -- bdev/blockdev.sh@17 -- # export 
RPC_PIPE_TIMEOUT=30 00:06:28.009 10:38:27 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:06:28.009 10:38:27 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:06:28.009 10:38:27 blockdev_nvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:06:28.009 10:38:27 blockdev_nvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:06:28.009 10:38:27 blockdev_nvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:06:28.009 10:38:27 blockdev_nvme -- bdev/blockdev.sh@673 -- # uname -s 00:06:28.009 10:38:27 blockdev_nvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:06:28.009 10:38:27 blockdev_nvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:06:28.009 10:38:27 blockdev_nvme -- bdev/blockdev.sh@681 -- # test_type=nvme 00:06:28.009 10:38:27 blockdev_nvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:06:28.010 10:38:27 blockdev_nvme -- bdev/blockdev.sh@683 -- # dek= 00:06:28.010 10:38:27 blockdev_nvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:06:28.010 10:38:27 blockdev_nvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:06:28.010 10:38:27 blockdev_nvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:06:28.010 10:38:27 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == bdev ]] 00:06:28.010 10:38:27 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == crypto_* ]] 00:06:28.010 10:38:27 blockdev_nvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:06:28.010 10:38:27 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=71841 00:06:28.010 10:38:27 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:06:28.010 10:38:27 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 71841 00:06:28.010 10:38:27 blockdev_nvme -- common/autotest_common.sh@831 -- # '[' -z 71841 ']' 00:06:28.010 10:38:27 blockdev_nvme -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:28.010 10:38:27 blockdev_nvme -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:28.010 10:38:27 blockdev_nvme -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:28.010 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:28.010 10:38:27 blockdev_nvme -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:28.010 10:38:27 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:28.010 10:38:27 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:06:28.010 [2024-12-16 10:38:27.898421] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
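The trace above launches spdk_tgt (pid 71841) and then blocks in waitforlisten until the target answers on /var/tmp/spdk.sock. A minimal bash sketch of that polling pattern, reconstructed from the visible trace; the use of rpc_get_methods as the liveness probe and the 0.5 s retry interval are assumptions, not the harness's exact code:

    # Sketch of the waitforlisten pattern: poll until the spdk_tgt process
    # answers on its UNIX-domain RPC socket, failing if it exits first.
    waitforlisten_sketch() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = 0; i < max_retries; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1   # target died before listening
            if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" -t 1 \
                    rpc_get_methods &>/dev/null; then
                return 0                             # socket is up and answering
            fi
            sleep 0.5
        done
        return 1                                     # timed out waiting
    }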
00:06:28.010 [2024-12-16 10:38:27.898676] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71841 ] 00:06:28.269 [2024-12-16 10:38:28.033812] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.269 [2024-12-16 10:38:28.069084] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.841 10:38:28 blockdev_nvme -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:28.841 10:38:28 blockdev_nvme -- common/autotest_common.sh@864 -- # return 0 00:06:28.841 10:38:28 blockdev_nvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:06:28.841 10:38:28 blockdev_nvme -- bdev/blockdev.sh@698 -- # setup_nvme_conf 00:06:28.841 10:38:28 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:06:28.841 10:38:28 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:06:28.841 10:38:28 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:28.841 10:38:28 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:06:28.841 10:38:28 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:28.841 10:38:28 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:29.102 10:38:29 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:29.102 10:38:29 blockdev_nvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:06:29.102 10:38:29 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:29.102 10:38:29 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:29.102 10:38:29 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:29.102 10:38:29 blockdev_nvme -- bdev/blockdev.sh@739 -- # cat 00:06:29.102 10:38:29 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:06:29.103 10:38:29 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:29.103 10:38:29 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:29.103 10:38:29 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:29.103 10:38:29 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:06:29.103 10:38:29 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:29.103 10:38:29 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:29.103 10:38:29 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:29.103 10:38:29 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:06:29.103 10:38:29 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:29.103 10:38:29 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:29.364 10:38:29 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:29.364 10:38:29 blockdev_nvme -- 
bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:06:29.364 10:38:29 blockdev_nvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:06:29.364 10:38:29 blockdev_nvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:06:29.364 10:38:29 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:29.364 10:38:29 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:29.364 10:38:29 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:29.364 10:38:29 blockdev_nvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:06:29.364 10:38:29 blockdev_nvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:06:29.365 10:38:29 blockdev_nvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "edc21950-12ca-467c-8cce-4eaaa529a884"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "edc21950-12ca-467c-8cce-4eaaa529a884",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "cb92e1ec-d50e-47b9-a84a-17175e8c6409"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "cb92e1ec-d50e-47b9-a84a-17175e8c6409",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": 
"nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "78848336-eb41-42a2-938b-a6beb9807b5b"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "78848336-eb41-42a2-938b-a6beb9807b5b",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "9932a56b-1831-457c-a061-46e2d9a803dc"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "9932a56b-1831-457c-a061-46e2d9a803dc",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "55e7135f-a13e-4b15-86b1-469bf232ba6e"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 
1048576,' ' "uuid": "55e7135f-a13e-4b15-86b1-469bf232ba6e",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "e95223a1-d892-470b-831e-d0cd5de18a80"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "e95223a1-d892-470b-831e-d0cd5de18a80",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:06:29.365 10:38:29 blockdev_nvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:06:29.365 10:38:29 blockdev_nvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:06:29.365 10:38:29 blockdev_nvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:06:29.365 10:38:29 blockdev_nvme -- bdev/blockdev.sh@753 -- # killprocess 71841 00:06:29.365 10:38:29 blockdev_nvme -- common/autotest_common.sh@950 -- # '[' -z 71841 ']' 00:06:29.365 10:38:29 blockdev_nvme -- common/autotest_common.sh@954 -- # kill -0 71841 00:06:29.365 10:38:29 blockdev_nvme -- common/autotest_common.sh@955 -- # uname 00:06:29.365 10:38:29 
blockdev_nvme -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:29.365 10:38:29 blockdev_nvme -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71841 00:06:29.365 killing process with pid 71841 00:06:29.365 10:38:29 blockdev_nvme -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:29.365 10:38:29 blockdev_nvme -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:29.365 10:38:29 blockdev_nvme -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71841' 00:06:29.365 10:38:29 blockdev_nvme -- common/autotest_common.sh@969 -- # kill 71841 00:06:29.365 10:38:29 blockdev_nvme -- common/autotest_common.sh@974 -- # wait 71841 00:06:29.626 10:38:29 blockdev_nvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:29.626 10:38:29 blockdev_nvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:29.626 10:38:29 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:06:29.626 10:38:29 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:29.626 10:38:29 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:29.626 ************************************ 00:06:29.626 START TEST bdev_hello_world 00:06:29.626 ************************************ 00:06:29.626 10:38:29 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:29.626 [2024-12-16 10:38:29.543206] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:06:29.626 [2024-12-16 10:38:29.543447] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71913 ] 00:06:29.887 [2024-12-16 10:38:29.678195] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.887 [2024-12-16 10:38:29.710365] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.148 [2024-12-16 10:38:30.078232] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:06:30.148 [2024-12-16 10:38:30.078280] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:06:30.148 [2024-12-16 10:38:30.078303] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:06:30.148 [2024-12-16 10:38:30.080342] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:06:30.148 [2024-12-16 10:38:30.080872] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:06:30.148 [2024-12-16 10:38:30.080902] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:06:30.148 [2024-12-16 10:38:30.081192] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
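The NOTICE lines above trace the hello_bdev example end to end: open Nvme0n1, write a buffer, read it back, and compare, which is what the final "Read string from bdev : Hello World!" confirms. The step can be reproduced outside the harness with the same invocation the run_test line used, paths exactly as they appear in the log:

    # Run the bdev hello-world example directly against the generated config;
    # -b names the bdev to open, --json supplies the attached NVMe controllers.
    /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -b Nvme0n1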
00:06:30.148 00:06:30.148 [2024-12-16 10:38:30.081219] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:06:30.409 ************************************ 00:06:30.409 END TEST bdev_hello_world 00:06:30.409 ************************************ 00:06:30.409 00:06:30.409 real 0m0.747s 00:06:30.409 user 0m0.487s 00:06:30.409 sys 0m0.157s 00:06:30.409 10:38:30 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:30.409 10:38:30 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:06:30.409 10:38:30 blockdev_nvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:06:30.409 10:38:30 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:30.409 10:38:30 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:30.409 10:38:30 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:30.409 ************************************ 00:06:30.409 START TEST bdev_bounds 00:06:30.409 ************************************ 00:06:30.409 Process bdevio pid: 71944 00:06:30.409 10:38:30 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:06:30.409 10:38:30 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=71944 00:06:30.409 10:38:30 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:06:30.409 10:38:30 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 71944' 00:06:30.409 10:38:30 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 71944 00:06:30.409 10:38:30 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 71944 ']' 00:06:30.409 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:30.409 10:38:30 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:30.409 10:38:30 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:30.409 10:38:30 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:30.409 10:38:30 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:30.409 10:38:30 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:30.409 10:38:30 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:30.409 [2024-12-16 10:38:30.349029] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
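bdev_bounds drives bdevio in wait mode: the server is started with -w so it loads the bdevs and then idles until tests.py triggers the suites over RPC (the perform_tests call is traced just below). A condensed sketch of that two-step pattern, with both commands taken from the trace; the explicit backgrounding and kill here are assumptions, since the harness wraps this in run_test with signal traps:

    # Start bdevio in wait mode (-w) with no reserved memory (-s 0), then
    # fire all test suites at it over the default RPC socket.
    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &
    bdevio_pid=$!
    # ...wait for the RPC socket as in the earlier waitforlisten sketch...
    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests
    kill "$bdevio_pid"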
00:06:30.409 [2024-12-16 10:38:30.349290] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71944 ] 00:06:30.671 [2024-12-16 10:38:30.485723] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:30.671 [2024-12-16 10:38:30.519980] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:30.671 [2024-12-16 10:38:30.520030] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.671 [2024-12-16 10:38:30.520063] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:31.242 10:38:31 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:31.242 10:38:31 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:06:31.242 10:38:31 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:06:31.504 I/O targets: 00:06:31.504 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:06:31.504 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:06:31.504 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:31.504 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:31.504 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:31.504 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:06:31.504 00:06:31.504 00:06:31.504 CUnit - A unit testing framework for C - Version 2.1-3 00:06:31.504 http://cunit.sourceforge.net/ 00:06:31.504 00:06:31.504 00:06:31.504 Suite: bdevio tests on: Nvme3n1 00:06:31.504 Test: blockdev write read block ...passed 00:06:31.504 Test: blockdev write zeroes read block ...passed 00:06:31.504 Test: blockdev write zeroes read no split ...passed 00:06:31.504 Test: blockdev write zeroes read split ...passed 00:06:31.504 Test: blockdev write zeroes read split partial ...passed 00:06:31.504 Test: blockdev reset ...[2024-12-16 10:38:31.311972] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0] resetting controller 00:06:31.504 [2024-12-16 10:38:31.316059] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:06:31.504 passed 00:06:31.504 Test: blockdev write read 8 blocks ...passed 00:06:31.504 Test: blockdev write read size > 128k ...passed 00:06:31.504 Test: blockdev write read invalid size ...passed 00:06:31.504 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:31.504 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:31.504 Test: blockdev write read max offset ...passed 00:06:31.504 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:31.504 Test: blockdev writev readv 8 blocks ...passed 00:06:31.504 Test: blockdev writev readv 30 x 1block ...passed 00:06:31.504 Test: blockdev writev readv block ...passed 00:06:31.504 Test: blockdev writev readv size > 128k ...passed 00:06:31.504 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:31.504 Test: blockdev comparev and writev ...[2024-12-16 10:38:31.332491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c7e0a000 len:0x1000 00:06:31.504 [2024-12-16 10:38:31.332539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:31.504 passed 00:06:31.504 Test: blockdev nvme passthru rw ...passed 00:06:31.504 Test: blockdev nvme passthru vendor specific ...[2024-12-16 10:38:31.335156] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:31.504 [2024-12-16 10:38:31.335269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:31.504 passed 00:06:31.504 Test: blockdev nvme admin passthru ...passed 00:06:31.504 Test: blockdev copy ...passed 00:06:31.504 Suite: bdevio tests on: Nvme2n3 00:06:31.504 Test: blockdev write read block ...passed 00:06:31.504 Test: blockdev write zeroes read block ...passed 00:06:31.504 Test: blockdev write zeroes read no split ...passed 00:06:31.504 Test: blockdev write zeroes read split ...passed 00:06:31.504 Test: blockdev write zeroes read split partial ...passed 00:06:31.504 Test: blockdev reset ...[2024-12-16 10:38:31.362412] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:06:31.504 [2024-12-16 10:38:31.365721] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:06:31.504 passed 00:06:31.504 Test: blockdev write read 8 blocks ...passed 00:06:31.504 Test: blockdev write read size > 128k ...passed 00:06:31.504 Test: blockdev write read invalid size ...passed 00:06:31.504 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:31.504 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:31.504 Test: blockdev write read max offset ...passed 00:06:31.504 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:31.504 Test: blockdev writev readv 8 blocks ...passed 00:06:31.504 Test: blockdev writev readv 30 x 1block ...passed 00:06:31.504 Test: blockdev writev readv block ...passed 00:06:31.504 Test: blockdev writev readv size > 128k ...passed 00:06:31.504 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:31.504 Test: blockdev comparev and writev ...[2024-12-16 10:38:31.381661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c7e03000 len:0x1000 00:06:31.504 [2024-12-16 10:38:31.381710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:31.504 passed 00:06:31.504 Test: blockdev nvme passthru rw ...passed 00:06:31.504 Test: blockdev nvme passthru vendor specific ...passed 00:06:31.504 Test: blockdev nvme admin passthru ...[2024-12-16 10:38:31.384181] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:31.504 [2024-12-16 10:38:31.384218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:31.504 passed 00:06:31.504 Test: blockdev copy ...passed 00:06:31.504 Suite: bdevio tests on: Nvme2n2 00:06:31.504 Test: blockdev write read block ...passed 00:06:31.504 Test: blockdev write zeroes read block ...passed 00:06:31.504 Test: blockdev write zeroes read no split ...passed 00:06:31.504 Test: blockdev write zeroes read split ...passed 00:06:31.504 Test: blockdev write zeroes read split partial ...passed 00:06:31.504 Test: blockdev reset ...[2024-12-16 10:38:31.412612] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:06:31.504 [2024-12-16 10:38:31.415674] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:06:31.504 passed 00:06:31.504 Test: blockdev write read 8 blocks ...passed 00:06:31.504 Test: blockdev write read size > 128k ...passed 00:06:31.504 Test: blockdev write read invalid size ...passed 00:06:31.504 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:31.504 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:31.504 Test: blockdev write read max offset ...passed 00:06:31.504 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:31.504 Test: blockdev writev readv 8 blocks ...passed 00:06:31.504 Test: blockdev writev readv 30 x 1block ...passed 00:06:31.504 Test: blockdev writev readv block ...passed 00:06:31.504 Test: blockdev writev readv size > 128k ...passed 00:06:31.504 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:31.504 Test: blockdev comparev and writev ...[2024-12-16 10:38:31.430131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c7e03000 len:0x1000 00:06:31.504 [2024-12-16 10:38:31.430264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:31.504 passed 00:06:31.504 Test: blockdev nvme passthru rw ...passed 00:06:31.504 Test: blockdev nvme passthru vendor specific ...passed 00:06:31.504 Test: blockdev nvme admin passthru ...[2024-12-16 10:38:31.432387] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:31.504 [2024-12-16 10:38:31.432421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:31.504 passed 00:06:31.504 Test: blockdev copy ...passed 00:06:31.504 Suite: bdevio tests on: Nvme2n1 00:06:31.504 Test: blockdev write read block ...passed 00:06:31.504 Test: blockdev write zeroes read block ...passed 00:06:31.504 Test: blockdev write zeroes read no split ...passed 00:06:31.504 Test: blockdev write zeroes read split ...passed 00:06:31.504 Test: blockdev write zeroes read split partial ...passed 00:06:31.504 Test: blockdev reset ...[2024-12-16 10:38:31.451160] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:06:31.504 [2024-12-16 10:38:31.454104] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:06:31.504 passed 00:06:31.504 Test: blockdev write read 8 blocks ...passed 00:06:31.504 Test: blockdev write read size > 128k ...passed 00:06:31.504 Test: blockdev write read invalid size ...passed 00:06:31.504 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:31.504 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:31.504 Test: blockdev write read max offset ...passed 00:06:31.504 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:31.504 Test: blockdev writev readv 8 blocks ...passed 00:06:31.504 Test: blockdev writev readv 30 x 1block ...passed 00:06:31.504 Test: blockdev writev readv block ...passed 00:06:31.504 Test: blockdev writev readv size > 128k ...passed 00:06:31.504 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:31.504 Test: blockdev comparev and writev ...[2024-12-16 10:38:31.468822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c7e03000 len:0x1000 00:06:31.505 [2024-12-16 10:38:31.468863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:31.505 passed 00:06:31.505 Test: blockdev nvme passthru rw ...passed 00:06:31.505 Test: blockdev nvme passthru vendor specific ...passed 00:06:31.505 Test: blockdev nvme admin passthru ...[2024-12-16 10:38:31.471138] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:31.505 [2024-12-16 10:38:31.471170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:31.505 passed 00:06:31.505 Test: blockdev copy ...passed 00:06:31.505 Suite: bdevio tests on: Nvme1n1 00:06:31.505 Test: blockdev write read block ...passed 00:06:31.505 Test: blockdev write zeroes read block ...passed 00:06:31.766 Test: blockdev write zeroes read no split ...passed 00:06:31.766 Test: blockdev write zeroes read split ...passed 00:06:31.766 Test: blockdev write zeroes read split partial ...passed 00:06:31.766 Test: blockdev reset ...[2024-12-16 10:38:31.500821] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller 00:06:31.766 [2024-12-16 10:38:31.503257] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:06:31.766 passed 00:06:31.766 Test: blockdev write read 8 blocks ...passed 00:06:31.766 Test: blockdev write read size > 128k ...passed 00:06:31.766 Test: blockdev write read invalid size ...passed 00:06:31.766 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:31.766 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:31.766 Test: blockdev write read max offset ...passed 00:06:31.766 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:31.766 Test: blockdev writev readv 8 blocks ...passed 00:06:31.766 Test: blockdev writev readv 30 x 1block ...passed 00:06:31.766 Test: blockdev writev readv block ...passed 00:06:31.766 Test: blockdev writev readv size > 128k ...passed 00:06:31.766 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:31.766 Test: blockdev comparev and writev ...[2024-12-16 10:38:31.518751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c8236000 len:0x1000 00:06:31.766 [2024-12-16 10:38:31.518791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:31.766 passed 00:06:31.766 Test: blockdev nvme passthru rw ...passed 00:06:31.766 Test: blockdev nvme passthru vendor specific ...[2024-12-16 10:38:31.520871] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:31.766 [2024-12-16 10:38:31.520993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:31.766 passed 00:06:31.766 Test: blockdev nvme admin passthru ...passed 00:06:31.766 Test: blockdev copy ...passed 00:06:31.766 Suite: bdevio tests on: Nvme0n1 00:06:31.766 Test: blockdev write read block ...passed 00:06:31.766 Test: blockdev write zeroes read block ...passed 00:06:31.766 Test: blockdev write zeroes read no split ...passed 00:06:31.766 Test: blockdev write zeroes read split ...passed 00:06:31.766 Test: blockdev write zeroes read split partial ...passed 00:06:31.766 Test: blockdev reset ...[2024-12-16 10:38:31.551423] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:06:31.766 [2024-12-16 10:38:31.553991] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:06:31.766 passed 00:06:31.766 Test: blockdev write read 8 blocks ...
00:06:31.766 passed 00:06:31.766 Test: blockdev write read size > 128k ...passed 00:06:31.766 Test: blockdev write read invalid size ...passed 00:06:31.766 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:31.766 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:31.766 Test: blockdev write read max offset ...passed 00:06:31.766 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:31.766 Test: blockdev writev readv 8 blocks ...passed 00:06:31.766 Test: blockdev writev readv 30 x 1block ...passed 00:06:31.766 Test: blockdev writev readv block ...passed 00:06:31.766 Test: blockdev writev readv size > 128k ...passed 00:06:31.766 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:31.766 Test: blockdev comparev and writev ...passed 00:06:31.766 Test: blockdev nvme passthru rw ...[2024-12-16 10:38:31.560610] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:06:31.766 separate metadata which is not supported yet. 00:06:31.766 passed 00:06:31.766 Test: blockdev nvme passthru vendor specific ...[2024-12-16 10:38:31.561520] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:06:31.766 [2024-12-16 10:38:31.561551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:06:31.766 passed 00:06:31.766 Test: blockdev nvme admin passthru ...passed 00:06:31.766 Test: blockdev copy ...passed 00:06:31.766 00:06:31.766 Run Summary: Type Total Ran Passed Failed Inactive 00:06:31.766 suites 6 6 n/a 0 0 00:06:31.766 tests 138 138 138 0 0 00:06:31.766 asserts 893 893 893 0 n/a 00:06:31.766 00:06:31.766 Elapsed time = 0.621 seconds 00:06:31.766 0 00:06:31.766 10:38:31 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 71944 00:06:31.766 10:38:31 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 71944 ']' 00:06:31.766 10:38:31 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 71944 00:06:31.766 10:38:31 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:06:31.766 10:38:31 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:31.766 10:38:31 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71944 00:06:31.766 10:38:31 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:31.766 10:38:31 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:31.766 killing process with pid 71944 00:06:31.766 10:38:31 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71944' 00:06:31.766 10:38:31 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@969 -- # kill 71944 00:06:31.766 10:38:31 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@974 -- # wait 71944 00:06:32.027 10:38:31 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:06:32.027 00:06:32.027 real 0m1.503s 00:06:32.027 user 0m3.755s 00:06:32.027 sys 0m0.268s 00:06:32.027 10:38:31 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:32.027 ************************************ 00:06:32.027 END TEST bdev_bounds 00:06:32.027 10:38:31 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:32.027 
************************************ 00:06:32.027 10:38:31 blockdev_nvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:32.027 10:38:31 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:06:32.027 10:38:31 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:32.027 10:38:31 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:32.027 ************************************ 00:06:32.027 START TEST bdev_nbd 00:06:32.027 ************************************ 00:06:32.027 10:38:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:32.027 10:38:31 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:06:32.027 10:38:31 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:06:32.027 10:38:31 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:32.027 10:38:31 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:32.027 10:38:31 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:32.027 10:38:31 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:06:32.027 10:38:31 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:06:32.027 10:38:31 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:06:32.027 10:38:31 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:06:32.028 10:38:31 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:06:32.028 10:38:31 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:06:32.028 10:38:31 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:32.028 10:38:31 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:06:32.028 10:38:31 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:32.028 10:38:31 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:06:32.028 10:38:31 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=71987 00:06:32.028 10:38:31 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:06:32.028 10:38:31 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 71987 /var/tmp/spdk-nbd.sock 00:06:32.028 10:38:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 71987 ']' 00:06:32.028 10:38:31 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:32.028 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
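The nbd test starting here pairs each bdev in bdev_list with a /dev/nbdX node and then verifies the device with a small direct-I/O read. A condensed bash sketch of the start-and-verify flow traced below; the nbd_start_disk RPC, the /proc/partitions poll, and the 4096-byte dd probe are all visible in the trace that follows, while the /tmp scratch path and the 0.1 s retry sleep are assumptions:

    # Attach each bdev to an NBD node over the spdk-nbd RPC socket, then wait
    # for the kernel to expose it and read one 4 KiB block through it.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock
    for bdev in Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1; do
        nbd_device=$("$rpc" -s "$sock" nbd_start_disk "$bdev")   # e.g. /dev/nbd0
        nbd_name=$(basename "$nbd_device")
        for ((i = 1; i <= 20; i++)); do                          # waitfornbd
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1
        done
        dd if="$nbd_device" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
        [[ $(stat -c %s /tmp/nbdtest) == 4096 ]] && rm -f /tmp/nbdtest
    done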
00:06:32.028 10:38:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:32.028 10:38:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:32.028 10:38:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:32.028 10:38:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:32.028 10:38:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:06:32.028 [2024-12-16 10:38:31.932434] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:06:32.028 [2024-12-16 10:38:31.932572] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:32.289 [2024-12-16 10:38:32.072978] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.289 [2024-12-16 10:38:32.125969] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.863 10:38:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:32.863 10:38:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:06:32.863 10:38:32 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:32.863 10:38:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:32.863 10:38:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:32.863 10:38:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:06:32.863 10:38:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:32.863 10:38:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:32.863 10:38:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:32.863 10:38:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:06:32.863 10:38:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:06:32.863 10:38:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:06:32.863 10:38:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:06:32.863 10:38:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:32.863 10:38:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:06:33.124 10:38:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:06:33.124 10:38:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:06:33.124 10:38:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:06:33.124 10:38:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:33.124 10:38:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:33.124 10:38:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 
)) 00:06:33.124 10:38:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:33.124 10:38:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:33.124 10:38:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:33.124 10:38:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:33.124 10:38:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:33.124 10:38:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:33.124 1+0 records in 00:06:33.124 1+0 records out 00:06:33.124 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000953053 s, 4.3 MB/s 00:06:33.124 10:38:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:33.124 10:38:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:33.124 10:38:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:33.124 10:38:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:33.124 10:38:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:33.124 10:38:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:33.124 10:38:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:33.124 10:38:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:06:33.404 10:38:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:06:33.404 10:38:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:06:33.404 10:38:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:06:33.404 10:38:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:33.404 10:38:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:33.404 10:38:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:33.404 10:38:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:33.404 10:38:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:33.404 10:38:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:33.404 10:38:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:33.404 10:38:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:33.404 10:38:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:33.404 1+0 records in 00:06:33.404 1+0 records out 00:06:33.404 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000980436 s, 4.2 MB/s 00:06:33.404 10:38:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:33.404 10:38:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:33.404 10:38:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:33.404 10:38:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 
']' 00:06:33.404 10:38:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:33.404 10:38:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:33.404 10:38:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:33.404 10:38:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:06:33.674 10:38:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:06:33.674 10:38:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:06:33.674 10:38:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:06:33.674 10:38:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd2 00:06:33.674 10:38:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:33.674 10:38:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:33.674 10:38:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:33.674 10:38:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd2 /proc/partitions 00:06:33.674 10:38:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:33.674 10:38:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:33.674 10:38:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:33.674 10:38:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:33.674 1+0 records in 00:06:33.674 1+0 records out 00:06:33.674 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00134296 s, 3.0 MB/s 00:06:33.674 10:38:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:33.674 10:38:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:33.674 10:38:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:33.674 10:38:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:33.674 10:38:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:33.674 10:38:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:33.674 10:38:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:33.674 10:38:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:06:33.936 10:38:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:06:33.936 10:38:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:06:33.936 10:38:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:06:33.936 10:38:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd3 00:06:33.936 10:38:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:33.936 10:38:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:33.936 10:38:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:33.936 10:38:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd3 /proc/partitions 00:06:33.936 10:38:33 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@873 -- # break 00:06:33.936 10:38:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:33.936 10:38:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:33.936 10:38:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:33.936 1+0 records in 00:06:33.936 1+0 records out 00:06:33.936 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000797219 s, 5.1 MB/s 00:06:33.936 10:38:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:33.936 10:38:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:33.936 10:38:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:33.936 10:38:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:33.936 10:38:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:33.936 10:38:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:33.937 10:38:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:33.937 10:38:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:06:34.199 10:38:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:06:34.199 10:38:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:06:34.199 10:38:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:06:34.199 10:38:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd4 00:06:34.199 10:38:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:34.199 10:38:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:34.199 10:38:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:34.199 10:38:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd4 /proc/partitions 00:06:34.199 10:38:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:34.199 10:38:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:34.199 10:38:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:34.199 10:38:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:34.199 1+0 records in 00:06:34.199 1+0 records out 00:06:34.199 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000768859 s, 5.3 MB/s 00:06:34.199 10:38:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:34.199 10:38:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:34.199 10:38:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:34.199 10:38:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:34.199 10:38:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:34.199 10:38:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:34.199 10:38:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 
)) 00:06:34.199 10:38:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:06:34.460 10:38:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:06:34.460 10:38:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:06:34.460 10:38:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:06:34.460 10:38:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd5 00:06:34.460 10:38:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:34.460 10:38:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:34.460 10:38:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:34.460 10:38:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd5 /proc/partitions 00:06:34.460 10:38:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:34.460 10:38:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:34.460 10:38:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:34.460 10:38:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:34.460 1+0 records in 00:06:34.460 1+0 records out 00:06:34.460 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00105142 s, 3.9 MB/s 00:06:34.460 10:38:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:34.460 10:38:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:34.460 10:38:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:34.460 10:38:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:34.460 10:38:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:34.460 10:38:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:34.460 10:38:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:34.460 10:38:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:34.721 10:38:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:06:34.721 { 00:06:34.721 "nbd_device": "/dev/nbd0", 00:06:34.721 "bdev_name": "Nvme0n1" 00:06:34.721 }, 00:06:34.721 { 00:06:34.721 "nbd_device": "/dev/nbd1", 00:06:34.721 "bdev_name": "Nvme1n1" 00:06:34.721 }, 00:06:34.721 { 00:06:34.721 "nbd_device": "/dev/nbd2", 00:06:34.721 "bdev_name": "Nvme2n1" 00:06:34.721 }, 00:06:34.721 { 00:06:34.721 "nbd_device": "/dev/nbd3", 00:06:34.721 "bdev_name": "Nvme2n2" 00:06:34.721 }, 00:06:34.721 { 00:06:34.721 "nbd_device": "/dev/nbd4", 00:06:34.721 "bdev_name": "Nvme2n3" 00:06:34.721 }, 00:06:34.721 { 00:06:34.721 "nbd_device": "/dev/nbd5", 00:06:34.721 "bdev_name": "Nvme3n1" 00:06:34.721 } 00:06:34.721 ]' 00:06:34.721 10:38:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:06:34.721 10:38:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:06:34.721 10:38:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:06:34.721 { 00:06:34.721 
"nbd_device": "/dev/nbd0", 00:06:34.721 "bdev_name": "Nvme0n1" 00:06:34.721 }, 00:06:34.721 { 00:06:34.721 "nbd_device": "/dev/nbd1", 00:06:34.721 "bdev_name": "Nvme1n1" 00:06:34.721 }, 00:06:34.721 { 00:06:34.721 "nbd_device": "/dev/nbd2", 00:06:34.721 "bdev_name": "Nvme2n1" 00:06:34.721 }, 00:06:34.721 { 00:06:34.721 "nbd_device": "/dev/nbd3", 00:06:34.721 "bdev_name": "Nvme2n2" 00:06:34.721 }, 00:06:34.721 { 00:06:34.721 "nbd_device": "/dev/nbd4", 00:06:34.721 "bdev_name": "Nvme2n3" 00:06:34.721 }, 00:06:34.721 { 00:06:34.721 "nbd_device": "/dev/nbd5", 00:06:34.721 "bdev_name": "Nvme3n1" 00:06:34.721 } 00:06:34.721 ]' 00:06:34.721 10:38:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:06:34.721 10:38:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:34.721 10:38:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:06:34.721 10:38:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:34.721 10:38:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:34.721 10:38:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:34.721 10:38:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:34.983 10:38:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:34.983 10:38:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:34.983 10:38:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:34.983 10:38:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:34.983 10:38:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:34.983 10:38:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:34.983 10:38:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:34.983 10:38:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:34.983 10:38:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:34.983 10:38:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:35.244 10:38:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:35.244 10:38:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:35.244 10:38:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:35.244 10:38:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:35.244 10:38:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:35.244 10:38:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:35.244 10:38:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:35.244 10:38:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:35.244 10:38:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:35.244 10:38:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:06:35.509 10:38:35 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:06:35.509 10:38:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:06:35.509 10:38:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:06:35.509 10:38:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:35.509 10:38:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:35.509 10:38:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:06:35.509 10:38:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:35.510 10:38:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:35.510 10:38:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:35.510 10:38:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:06:35.510 10:38:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:06:35.510 10:38:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:06:35.510 10:38:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:06:35.510 10:38:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:35.510 10:38:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:35.510 10:38:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:06:35.510 10:38:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:35.510 10:38:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:35.510 10:38:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:35.510 10:38:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:06:35.770 10:38:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:06:35.771 10:38:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:06:35.771 10:38:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:06:35.771 10:38:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:35.771 10:38:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:35.771 10:38:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:06:35.771 10:38:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:35.771 10:38:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:35.771 10:38:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:35.771 10:38:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:06:36.032 10:38:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:06:36.032 10:38:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:06:36.032 10:38:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:06:36.032 10:38:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:36.032 10:38:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:36.032 10:38:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 
/proc/partitions 00:06:36.032 10:38:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:36.032 10:38:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:36.032 10:38:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:36.032 10:38:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:36.032 10:38:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:36.293 10:38:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:36.293 10:38:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:36.293 10:38:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:36.293 10:38:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:36.293 10:38:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:06:36.293 10:38:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:36.293 10:38:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:06:36.293 10:38:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:06:36.293 10:38:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:06:36.293 10:38:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:06:36.293 10:38:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:06:36.293 10:38:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:06:36.293 10:38:36 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:36.293 10:38:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:36.293 10:38:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:36.293 10:38:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:36.293 10:38:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:36.293 10:38:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:36.293 10:38:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:36.293 10:38:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:36.293 10:38:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:36.293 10:38:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:36.293 10:38:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:36.293 10:38:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:36.293 10:38:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:06:36.293 10:38:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:36.293 10:38:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 
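The start/stop pass above reduces to a small reusable pattern: ask the SPDK app over its RPC socket to export a bdev as an NBD device, poll /proc/partitions until the kernel registers it, issue one direct-I/O read to prove the data path, then tear it down. A minimal standalone sketch in bash, assuming an SPDK app already listening on /var/tmp/spdk-nbd.sock and the repo's scripts/rpc.py on PATH (the sleep is an addition for readability; the harness retries up to 20 times the same way):

    #!/usr/bin/env bash
    # Sketch of the start/wait/read-verify pattern traced above.
    sock=/var/tmp/spdk-nbd.sock
    bdev=Nvme0n1

    # nbd_start_disk prints the kernel device it allocated, e.g. /dev/nbd0.
    nbd=$(rpc.py -s "$sock" nbd_start_disk "$bdev")
    name=$(basename "$nbd")

    # Wait (up to 20 tries) for the device to appear in /proc/partitions.
    for i in $(seq 1 20); do
        grep -q -w "$name" /proc/partitions && break
        sleep 0.1
    done

    # A single 4 KiB direct read exercises the bdev-to-NBD data path.
    dd if="$nbd" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
    [ "$(stat -c %s /tmp/nbdtest)" -ne 0 ] || exit 1
    rm -f /tmp/nbdtest

    rpc.py -s "$sock" nbd_stop_disk "$nbd"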
00:06:36.293 10:38:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:06:36.555 /dev/nbd0 00:06:36.555 10:38:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:36.555 10:38:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:36.555 10:38:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:36.555 10:38:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:36.555 10:38:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:36.555 10:38:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:36.555 10:38:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:36.555 10:38:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:36.555 10:38:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:36.555 10:38:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:36.555 10:38:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:36.555 1+0 records in 00:06:36.555 1+0 records out 00:06:36.555 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000925338 s, 4.4 MB/s 00:06:36.555 10:38:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:36.555 10:38:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:36.555 10:38:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:36.555 10:38:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:36.555 10:38:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:36.555 10:38:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:36.555 10:38:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:36.555 10:38:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:06:36.817 /dev/nbd1 00:06:36.817 10:38:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:36.817 10:38:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:36.817 10:38:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:36.817 10:38:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:36.817 10:38:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:36.817 10:38:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:36.817 10:38:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:36.817 10:38:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:36.817 10:38:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:36.817 10:38:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:36.817 10:38:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 
iflag=direct 00:06:36.817 1+0 records in 00:06:36.817 1+0 records out 00:06:36.817 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00129509 s, 3.2 MB/s 00:06:36.817 10:38:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:36.817 10:38:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:36.817 10:38:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:36.817 10:38:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:36.817 10:38:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:36.817 10:38:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:36.817 10:38:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:36.817 10:38:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:06:37.079 /dev/nbd10 00:06:37.079 10:38:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:06:37.079 10:38:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:06:37.079 10:38:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd10 00:06:37.079 10:38:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:37.079 10:38:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:37.079 10:38:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:37.079 10:38:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd10 /proc/partitions 00:06:37.079 10:38:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:37.079 10:38:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:37.079 10:38:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:37.079 10:38:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:37.079 1+0 records in 00:06:37.079 1+0 records out 00:06:37.079 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0015613 s, 2.6 MB/s 00:06:37.079 10:38:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:37.079 10:38:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:37.079 10:38:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:37.079 10:38:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:37.079 10:38:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:37.079 10:38:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:37.079 10:38:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:37.079 10:38:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:06:37.341 /dev/nbd11 00:06:37.341 10:38:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:06:37.341 10:38:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:06:37.341 10:38:37 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@868 -- # local nbd_name=nbd11 00:06:37.341 10:38:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:37.341 10:38:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:37.341 10:38:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:37.341 10:38:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd11 /proc/partitions 00:06:37.341 10:38:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:37.341 10:38:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:37.341 10:38:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:37.341 10:38:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:37.341 1+0 records in 00:06:37.341 1+0 records out 00:06:37.341 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00112256 s, 3.6 MB/s 00:06:37.341 10:38:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:37.341 10:38:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:37.341 10:38:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:37.341 10:38:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:37.341 10:38:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:37.341 10:38:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:37.341 10:38:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:37.341 10:38:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:06:37.602 /dev/nbd12 00:06:37.602 10:38:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:06:37.602 10:38:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:06:37.602 10:38:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd12 00:06:37.602 10:38:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:37.602 10:38:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:37.602 10:38:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:37.602 10:38:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd12 /proc/partitions 00:06:37.602 10:38:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:37.602 10:38:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:37.602 10:38:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:37.602 10:38:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:37.602 1+0 records in 00:06:37.602 1+0 records out 00:06:37.602 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000937218 s, 4.4 MB/s 00:06:37.602 10:38:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:37.602 10:38:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:37.602 10:38:37 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:37.602 10:38:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:37.602 10:38:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:37.602 10:38:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:37.602 10:38:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:37.602 10:38:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:06:37.863 /dev/nbd13 00:06:37.863 10:38:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:06:37.863 10:38:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:06:37.863 10:38:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd13 00:06:37.863 10:38:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:37.863 10:38:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:37.863 10:38:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:37.863 10:38:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd13 /proc/partitions 00:06:37.863 10:38:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:37.863 10:38:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:37.863 10:38:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:37.863 10:38:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:37.863 1+0 records in 00:06:37.863 1+0 records out 00:06:37.863 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00109737 s, 3.7 MB/s 00:06:37.863 10:38:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:37.863 10:38:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:37.863 10:38:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:37.863 10:38:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:37.863 10:38:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:37.863 10:38:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:37.863 10:38:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:37.863 10:38:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:37.863 10:38:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:37.863 10:38:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:38.125 10:38:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:38.125 { 00:06:38.125 "nbd_device": "/dev/nbd0", 00:06:38.125 "bdev_name": "Nvme0n1" 00:06:38.125 }, 00:06:38.125 { 00:06:38.125 "nbd_device": "/dev/nbd1", 00:06:38.125 "bdev_name": "Nvme1n1" 00:06:38.125 }, 00:06:38.125 { 00:06:38.125 "nbd_device": "/dev/nbd10", 00:06:38.125 "bdev_name": "Nvme2n1" 00:06:38.125 }, 00:06:38.125 { 00:06:38.125 "nbd_device": "/dev/nbd11", 00:06:38.125 
"bdev_name": "Nvme2n2" 00:06:38.125 }, 00:06:38.125 { 00:06:38.125 "nbd_device": "/dev/nbd12", 00:06:38.125 "bdev_name": "Nvme2n3" 00:06:38.125 }, 00:06:38.125 { 00:06:38.125 "nbd_device": "/dev/nbd13", 00:06:38.125 "bdev_name": "Nvme3n1" 00:06:38.125 } 00:06:38.125 ]' 00:06:38.125 10:38:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:38.125 { 00:06:38.125 "nbd_device": "/dev/nbd0", 00:06:38.125 "bdev_name": "Nvme0n1" 00:06:38.125 }, 00:06:38.125 { 00:06:38.125 "nbd_device": "/dev/nbd1", 00:06:38.125 "bdev_name": "Nvme1n1" 00:06:38.125 }, 00:06:38.125 { 00:06:38.125 "nbd_device": "/dev/nbd10", 00:06:38.125 "bdev_name": "Nvme2n1" 00:06:38.125 }, 00:06:38.125 { 00:06:38.125 "nbd_device": "/dev/nbd11", 00:06:38.125 "bdev_name": "Nvme2n2" 00:06:38.125 }, 00:06:38.125 { 00:06:38.125 "nbd_device": "/dev/nbd12", 00:06:38.125 "bdev_name": "Nvme2n3" 00:06:38.125 }, 00:06:38.125 { 00:06:38.125 "nbd_device": "/dev/nbd13", 00:06:38.125 "bdev_name": "Nvme3n1" 00:06:38.125 } 00:06:38.125 ]' 00:06:38.125 10:38:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:38.125 10:38:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:38.125 /dev/nbd1 00:06:38.125 /dev/nbd10 00:06:38.125 /dev/nbd11 00:06:38.125 /dev/nbd12 00:06:38.125 /dev/nbd13' 00:06:38.125 10:38:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:38.125 10:38:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:38.125 /dev/nbd1 00:06:38.125 /dev/nbd10 00:06:38.125 /dev/nbd11 00:06:38.125 /dev/nbd12 00:06:38.125 /dev/nbd13' 00:06:38.125 10:38:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:06:38.125 10:38:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:06:38.125 10:38:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:06:38.125 10:38:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:06:38.125 10:38:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:06:38.125 10:38:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:38.125 10:38:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:38.125 10:38:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:38.125 10:38:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:38.125 10:38:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:38.125 10:38:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:06:38.125 256+0 records in 00:06:38.125 256+0 records out 00:06:38.125 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00426825 s, 246 MB/s 00:06:38.125 10:38:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:38.125 10:38:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:38.387 256+0 records in 00:06:38.387 256+0 records out 00:06:38.387 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.21781 s, 4.8 MB/s 00:06:38.387 10:38:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 
00:06:38.387 10:38:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:38.648 256+0 records in 00:06:38.648 256+0 records out 00:06:38.648 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.257813 s, 4.1 MB/s 00:06:38.648 10:38:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:38.648 10:38:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:06:38.910 256+0 records in 00:06:38.910 256+0 records out 00:06:38.910 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.256624 s, 4.1 MB/s 00:06:38.910 10:38:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:38.910 10:38:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:06:39.172 256+0 records in 00:06:39.172 256+0 records out 00:06:39.172 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.246986 s, 4.2 MB/s 00:06:39.172 10:38:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:39.172 10:38:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:06:39.434 256+0 records in 00:06:39.434 256+0 records out 00:06:39.434 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.24872 s, 4.2 MB/s 00:06:39.434 10:38:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:39.434 10:38:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:06:39.696 256+0 records in 00:06:39.696 256+0 records out 00:06:39.696 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.251229 s, 4.2 MB/s 00:06:39.696 10:38:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:06:39.696 10:38:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:39.696 10:38:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:39.696 10:38:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:39.696 10:38:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:39.696 10:38:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:39.696 10:38:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:39.696 10:38:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:39.696 10:38:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:06:39.696 10:38:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:39.696 10:38:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:06:39.696 10:38:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:39.696 10:38:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:06:39.696 10:38:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:39.696 10:38:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:06:39.696 10:38:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:39.696 10:38:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:06:39.696 10:38:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:39.696 10:38:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:06:39.696 10:38:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:39.696 10:38:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:39.696 10:38:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:39.696 10:38:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:39.696 10:38:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:39.696 10:38:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:39.696 10:38:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:39.696 10:38:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:39.958 10:38:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:39.958 10:38:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:39.958 10:38:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:39.958 10:38:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:39.958 10:38:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:39.958 10:38:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:39.958 10:38:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:39.958 10:38:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:39.958 10:38:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:39.958 10:38:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:40.217 10:38:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:40.217 10:38:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:40.217 10:38:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:40.217 10:38:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:40.217 10:38:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:40.217 10:38:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:40.217 10:38:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:40.217 10:38:40 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:06:40.217 10:38:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:40.217 10:38:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:06:40.476 10:38:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:06:40.476 10:38:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:06:40.476 10:38:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:06:40.476 10:38:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:40.476 10:38:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:40.476 10:38:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:06:40.476 10:38:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:40.476 10:38:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:40.476 10:38:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:40.476 10:38:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:06:40.476 10:38:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:06:40.735 10:38:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:06:40.735 10:38:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:06:40.735 10:38:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:40.735 10:38:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:40.735 10:38:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:06:40.735 10:38:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:40.735 10:38:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:40.735 10:38:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:40.735 10:38:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:06:40.735 10:38:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:06:40.735 10:38:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:06:40.735 10:38:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:06:40.735 10:38:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:40.735 10:38:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:40.735 10:38:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:06:40.735 10:38:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:40.735 10:38:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:40.735 10:38:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:40.735 10:38:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:06:41.001 10:38:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:06:41.001 10:38:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:06:41.001 10:38:40 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:06:41.001 10:38:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:41.001 10:38:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:41.001 10:38:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:06:41.001 10:38:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:41.001 10:38:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:41.001 10:38:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:41.001 10:38:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:41.001 10:38:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:41.261 10:38:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:41.261 10:38:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:41.261 10:38:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:41.261 10:38:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:41.261 10:38:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:06:41.261 10:38:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:41.261 10:38:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:06:41.261 10:38:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:06:41.261 10:38:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:06:41.261 10:38:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:06:41.261 10:38:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:41.261 10:38:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:06:41.261 10:38:41 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:06:41.261 10:38:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:41.261 10:38:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:06:41.261 10:38:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:06:41.519 malloc_lvol_verify 00:06:41.519 10:38:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:06:41.778 c79a15bc-f37b-400a-9366-1b8a2fc545e6 00:06:41.778 10:38:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:06:41.778 38441dd8-4ad7-423f-8c1a-1aa4e9ec1554 00:06:42.037 10:38:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:06:42.037 /dev/nbd0 00:06:42.037 10:38:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:06:42.037 10:38:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:06:42.037 10:38:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:06:42.037 10:38:41 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:06:42.037 10:38:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:06:42.037 mke2fs 1.47.0 (5-Feb-2023) 00:06:42.037 Discarding device blocks: 0/4096 done 00:06:42.037 Creating filesystem with 4096 1k blocks and 1024 inodes 00:06:42.037 00:06:42.037 Allocating group tables: 0/1 done 00:06:42.037 Writing inode tables: 0/1 done 00:06:42.037 Creating journal (1024 blocks): done 00:06:42.037 Writing superblocks and filesystem accounting information: 0/1 done 00:06:42.037 00:06:42.037 10:38:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:06:42.037 10:38:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:42.037 10:38:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:06:42.037 10:38:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:42.037 10:38:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:42.037 10:38:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:42.037 10:38:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:42.296 10:38:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:42.296 10:38:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:42.296 10:38:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:42.296 10:38:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:42.296 10:38:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:42.296 10:38:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:42.296 10:38:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:42.296 10:38:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:42.296 10:38:42 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 71987 00:06:42.296 10:38:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 71987 ']' 00:06:42.296 10:38:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 71987 00:06:42.296 10:38:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:06:42.296 10:38:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:42.296 10:38:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71987 00:06:42.296 killing process with pid 71987 00:06:42.296 10:38:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:42.296 10:38:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:42.296 10:38:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71987' 00:06:42.296 10:38:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@969 -- # kill 71987 00:06:42.296 10:38:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@974 -- # wait 71987 00:06:42.556 ************************************ 00:06:42.556 END TEST bdev_nbd 00:06:42.556 ************************************ 00:06:42.556 10:38:42 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:06:42.556 00:06:42.556 real 0m10.561s 00:06:42.556 user 0m14.527s 
00:06:42.556 sys 0m3.696s 00:06:42.556 10:38:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:42.556 10:38:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:06:42.556 10:38:42 blockdev_nvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:06:42.556 10:38:42 blockdev_nvme -- bdev/blockdev.sh@763 -- # '[' nvme = nvme ']' 00:06:42.556 skipping fio tests on NVMe due to multi-ns failures. 00:06:42.556 10:38:42 blockdev_nvme -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:06:42.556 10:38:42 blockdev_nvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:42.556 10:38:42 blockdev_nvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:06:42.556 10:38:42 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:06:42.556 10:38:42 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:42.556 10:38:42 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:42.556 ************************************ 00:06:42.556 START TEST bdev_verify 00:06:42.556 ************************************ 00:06:42.556 10:38:42 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:06:42.816 [2024-12-16 10:38:42.549801] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:06:42.816 [2024-12-16 10:38:42.549949] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72377 ] 00:06:42.816 [2024-12-16 10:38:42.684289] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:42.816 [2024-12-16 10:38:42.718437] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:42.816 [2024-12-16 10:38:42.718534] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.442 Running I/O for 5 seconds... 
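With the NBD pass done, verification moves to bdevperf, which drives the same six bdevs directly instead of through the kernel. The invocation is the one visible in the run_test line above; of its flags, -q 128 sets the queue depth, -o 4096 the I/O size in bytes, -w verify a workload that reads back and checks what it wrote, -t 5 the run time in seconds, and -m 0x3 a two-core mask (hence the Core Mask 0x1/0x2 job pairs in the results that follow), with -C passed through from the harness as well:

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3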
00:06:45.766 24256.00 IOPS, 94.75 MiB/s [2024-12-16T10:38:46.325Z] 23168.00 IOPS, 90.50 MiB/s [2024-12-16T10:38:47.710Z] 21802.67 IOPS, 85.17 MiB/s [2024-12-16T10:38:48.276Z] 21120.00 IOPS, 82.50 MiB/s [2024-12-16T10:38:48.276Z] 21248.00 IOPS, 83.00 MiB/s 00:06:48.287 Latency(us) 00:06:48.287 [2024-12-16T10:38:48.276Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:48.287 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:48.287 Verification LBA range: start 0x0 length 0xbd0bd 00:06:48.287 Nvme0n1 : 5.06 1797.49 7.02 0.00 0.00 71058.73 10183.29 75013.51 00:06:48.287 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:48.287 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:06:48.287 Nvme0n1 : 5.05 1722.02 6.73 0.00 0.00 74177.28 8116.38 69367.34 00:06:48.287 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:48.287 Verification LBA range: start 0x0 length 0xa0000 00:06:48.287 Nvme1n1 : 5.06 1796.99 7.02 0.00 0.00 70993.17 11494.01 66544.25 00:06:48.287 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:48.287 Verification LBA range: start 0xa0000 length 0xa0000 00:06:48.287 Nvme1n1 : 5.06 1720.93 6.72 0.00 0.00 74059.02 10284.11 67350.84 00:06:48.287 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:48.287 Verification LBA range: start 0x0 length 0x80000 00:06:48.287 Nvme2n1 : 5.06 1795.24 7.01 0.00 0.00 70925.39 12754.31 68560.74 00:06:48.287 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:48.287 Verification LBA range: start 0x80000 length 0x80000 00:06:48.287 Nvme2n1 : 5.06 1719.86 6.72 0.00 0.00 73935.22 11645.24 69770.63 00:06:48.287 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:48.287 Verification LBA range: start 0x0 length 0x80000 00:06:48.287 Nvme2n2 : 5.07 1794.16 7.01 0.00 0.00 70838.31 13611.32 74206.92 00:06:48.287 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:48.287 Verification LBA range: start 0x80000 length 0x80000 00:06:48.287 Nvme2n2 : 5.06 1719.28 6.72 0.00 0.00 73815.10 10939.47 68964.04 00:06:48.287 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:48.287 Verification LBA range: start 0x0 length 0x80000 00:06:48.287 Nvme2n3 : 5.07 1793.04 7.00 0.00 0.00 70733.85 12754.31 77030.01 00:06:48.287 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:48.287 Verification LBA range: start 0x80000 length 0x80000 00:06:48.287 Nvme2n3 : 5.07 1718.23 6.71 0.00 0.00 73712.19 10132.87 69770.63 00:06:48.287 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:48.287 Verification LBA range: start 0x0 length 0x20000 00:06:48.287 Nvme3n1 : 5.07 1792.06 7.00 0.00 0.00 70629.75 8570.09 77030.01 00:06:48.287 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:48.287 Verification LBA range: start 0x20000 length 0x20000 00:06:48.287 Nvme3n1 : 5.07 1717.14 6.71 0.00 0.00 73631.37 7561.85 69770.63 00:06:48.287 [2024-12-16T10:38:48.276Z] =================================================================================================================== 00:06:48.287 [2024-12-16T10:38:48.276Z] Total : 21086.43 82.37 0.00 0.00 72343.14 7561.85 77030.01 00:06:48.853 00:06:48.853 real 0m6.297s 00:06:48.853 user 0m11.910s 00:06:48.853 sys 0m0.187s 00:06:48.853 10:38:48 blockdev_nvme.bdev_verify -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:06:48.853 10:38:48 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:06:48.853 ************************************ 00:06:48.853 END TEST bdev_verify 00:06:48.853 ************************************ 00:06:48.853 10:38:48 blockdev_nvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:06:48.853 10:38:48 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:06:48.853 10:38:48 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:48.853 10:38:48 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:48.853 ************************************ 00:06:48.853 START TEST bdev_verify_big_io 00:06:48.853 ************************************ 00:06:48.853 10:38:48 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:06:49.112 [2024-12-16 10:38:48.895752] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:06:49.112 [2024-12-16 10:38:48.895873] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72464 ] 00:06:49.112 [2024-12-16 10:38:49.029462] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:49.112 [2024-12-16 10:38:49.063268] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:49.112 [2024-12-16 10:38:49.063310] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.682 Running I/O for 5 seconds... 
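The big-IO pass reuses the same harness with only the IO size changed; again taken directly from the traced command line.

# Same invocation as the 4 KiB verify pass, with 64 KiB IOs (-o 65536).
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
    -q 128 -o 65536 -w verify -t 5 -C -m 0x3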
00:06:53.560 55.00 IOPS, 3.44 MiB/s [2024-12-16T10:38:55.453Z] 1589.00 IOPS, 99.31 MiB/s [2024-12-16T10:38:55.713Z] 2394.33 IOPS, 149.65 MiB/s [2024-12-16T10:38:55.713Z] 2406.00 IOPS, 150.38 MiB/s 00:06:55.724 Latency(us) 00:06:55.724 [2024-12-16T10:38:55.713Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:55.724 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:55.724 Verification LBA range: start 0x0 length 0xbd0b 00:06:55.724 Nvme0n1 : 5.74 129.70 8.11 0.00 0.00 937225.29 26416.05 1103424.59 00:06:55.724 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:55.724 Verification LBA range: start 0xbd0b length 0xbd0b 00:06:55.724 Nvme0n1 : 5.61 136.87 8.55 0.00 0.00 902442.01 83886.08 1077613.49 00:06:55.724 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:55.724 Verification LBA range: start 0x0 length 0xa000 00:06:55.724 Nvme1n1 : 5.75 133.62 8.35 0.00 0.00 892051.04 100018.02 884030.23 00:06:55.724 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:55.724 Verification LBA range: start 0xa000 length 0xa000 00:06:55.724 Nvme1n1 : 5.73 138.28 8.64 0.00 0.00 860758.07 112923.57 890483.00 00:06:55.724 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:55.724 Verification LBA range: start 0x0 length 0x8000 00:06:55.724 Nvme2n1 : 5.83 135.30 8.46 0.00 0.00 849425.53 79449.80 764653.88 00:06:55.724 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:55.724 Verification LBA range: start 0x8000 length 0x8000 00:06:55.724 Nvme2n1 : 5.80 143.44 8.96 0.00 0.00 808264.65 70173.93 774333.05 00:06:55.724 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:55.724 Verification LBA range: start 0x0 length 0x8000 00:06:55.724 Nvme2n2 : 5.91 140.68 8.79 0.00 0.00 794486.32 72997.02 774333.05 00:06:55.724 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:55.724 Verification LBA range: start 0x8000 length 0x8000 00:06:55.724 Nvme2n2 : 5.91 147.84 9.24 0.00 0.00 759912.13 85902.57 793691.37 00:06:55.724 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:55.724 Verification LBA range: start 0x0 length 0x8000 00:06:55.724 Nvme2n3 : 6.00 148.61 9.29 0.00 0.00 732833.54 54041.99 909841.33 00:06:55.724 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:55.724 Verification LBA range: start 0x8000 length 0x8000 00:06:55.724 Nvme2n3 : 5.94 154.88 9.68 0.00 0.00 706176.92 24500.38 816276.09 00:06:55.724 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:55.724 Verification LBA range: start 0x0 length 0x2000 00:06:55.724 Nvme3n1 : 6.01 152.27 9.52 0.00 0.00 697597.39 2142.52 1677721.60 00:06:55.724 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:55.724 Verification LBA range: start 0x2000 length 0x2000 00:06:55.724 Nvme3n1 : 6.01 170.38 10.65 0.00 0.00 624853.56 957.83 735616.39 00:06:55.724 [2024-12-16T10:38:55.713Z] =================================================================================================================== 00:06:55.724 [2024-12-16T10:38:55.713Z] Total : 1731.86 108.24 0.00 0.00 788837.45 957.83 1677721.60 00:06:56.671 00:06:56.671 real 0m7.794s 00:06:56.671 user 0m14.887s 00:06:56.671 sys 0m0.201s 00:06:56.671 10:38:56 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable 
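The IOPS and MiB/s columns in both result tables are consistent with each other: MiB/s = IOPS x IO size / 2^20, i.e. IOPS/256 for the 4 KiB pass and IOPS/16 for the 64 KiB pass. A quick spot check against the Total rows:

# Cross-check the Total rows above: MiB/s = IOPS * io_size / 2^20.
echo "scale=2; 21086.43 / 256" | bc   # 4 KiB verify pass  -> 82.36 (table: 82.37, rounded)
echo "scale=2; 1731.86 / 16"  | bc    # 64 KiB big-IO pass -> 108.24 (matches)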
00:06:56.671 10:38:56 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:06:56.671 ************************************ 00:06:56.671 END TEST bdev_verify_big_io 00:06:56.671 ************************************ 00:06:56.934 10:38:56 blockdev_nvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:56.934 10:38:56 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:06:56.934 10:38:56 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:56.934 10:38:56 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:56.934 ************************************ 00:06:56.934 START TEST bdev_write_zeroes 00:06:56.934 ************************************ 00:06:56.934 10:38:56 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:56.934 [2024-12-16 10:38:56.743549] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:06:56.934 [2024-12-16 10:38:56.743659] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72564 ] 00:06:56.934 [2024-12-16 10:38:56.877848] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.934 [2024-12-16 10:38:56.910167] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.505 Running I/O for 1 seconds... 
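Every pass in this log is wrapped by the run_test helper, which produces the START TEST / END TEST banners and the real/user/sys timings seen throughout. A hedged sketch of its shape follows; the real helper in autotest_common.sh also manages xtrace and performs the argument checks that show up as lines like '[' 13 -le 1 ']' above.

# Rough shape of the run_test wrapper implied by the banners in this log;
# a sketch only, not the actual autotest_common.sh implementation.
run_test() {
    local test_name=$1
    shift
    echo "************************************"
    echo "START TEST $test_name"
    echo "************************************"
    time "$@"
    local rc=$?
    echo "************************************"
    echo "END TEST $test_name"
    echo "************************************"
    return $rc
}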
00:06:58.446 58703.00 IOPS, 229.31 MiB/s 00:06:58.446 Latency(us) 00:06:58.446 [2024-12-16T10:38:58.435Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:58.446 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:58.446 Nvme0n1 : 1.02 9737.43 38.04 0.00 0.00 13115.75 5696.59 23088.84 00:06:58.446 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:58.446 Nvme1n1 : 1.02 9752.29 38.09 0.00 0.00 13081.69 7612.26 22181.42 00:06:58.446 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:58.446 Nvme2n1 : 1.02 9740.92 38.05 0.00 0.00 13055.49 6377.16 20971.52 00:06:58.446 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:58.446 Nvme2n2 : 1.03 9708.10 37.92 0.00 0.00 13076.65 6200.71 20366.57 00:06:58.446 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:58.446 Nvme2n3 : 1.03 9718.24 37.96 0.00 0.00 13045.27 6452.78 20568.22 00:06:58.446 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:58.446 Nvme3n1 : 1.03 9706.81 37.92 0.00 0.00 13043.22 7461.02 22383.06 00:06:58.446 [2024-12-16T10:38:58.435Z] =================================================================================================================== 00:06:58.446 [2024-12-16T10:38:58.435Z] Total : 58363.79 227.98 0.00 0.00 13069.66 5696.59 23088.84 00:06:58.708 00:06:58.708 real 0m1.821s 00:06:58.708 user 0m1.539s 00:06:58.708 sys 0m0.171s 00:06:58.708 10:38:58 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:58.708 ************************************ 00:06:58.708 END TEST bdev_write_zeroes 00:06:58.708 ************************************ 00:06:58.708 10:38:58 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:06:58.708 10:38:58 blockdev_nvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:58.708 10:38:58 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:06:58.708 10:38:58 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:58.708 10:38:58 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:58.708 ************************************ 00:06:58.708 START TEST bdev_json_nonenclosed 00:06:58.708 ************************************ 00:06:58.708 10:38:58 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:58.708 [2024-12-16 10:38:58.608809] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:06:58.708 [2024-12-16 10:38:58.608944] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72602 ] 00:06:58.970 [2024-12-16 10:38:58.745369] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.970 [2024-12-16 10:38:58.778045] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.970 [2024-12-16 10:38:58.778131] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:06:58.970 [2024-12-16 10:38:58.778146] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:06:58.970 [2024-12-16 10:38:58.778156] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:58.970 00:06:58.970 real 0m0.304s 00:06:58.970 user 0m0.121s 00:06:58.970 sys 0m0.079s 00:06:58.970 10:38:58 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:58.970 10:38:58 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:06:58.970 ************************************ 00:06:58.970 END TEST bdev_json_nonenclosed 00:06:58.970 ************************************ 00:06:58.970 10:38:58 blockdev_nvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:58.970 10:38:58 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:06:58.970 10:38:58 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:58.970 10:38:58 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:58.970 ************************************ 00:06:58.970 START TEST bdev_json_nonarray 00:06:58.970 ************************************ 00:06:58.970 10:38:58 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:59.231 [2024-12-16 10:38:58.976513] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:06:59.231 [2024-12-16 10:38:58.976626] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72626 ] 00:06:59.231 [2024-12-16 10:38:59.112947] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.231 [2024-12-16 10:38:59.145471] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.231 [2024-12-16 10:38:59.145562] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:06:59.231 [2024-12-16 10:38:59.145580] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:06:59.231 [2024-12-16 10:38:59.145594] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:59.492 00:06:59.492 real 0m0.309s 00:06:59.492 user 0m0.110s 00:06:59.492 sys 0m0.096s 00:06:59.492 10:38:59 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:59.492 10:38:59 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:06:59.492 ************************************ 00:06:59.492 END TEST bdev_json_nonarray 00:06:59.492 ************************************ 00:06:59.492 10:38:59 blockdev_nvme -- bdev/blockdev.sh@786 -- # [[ nvme == bdev ]] 00:06:59.492 10:38:59 blockdev_nvme -- bdev/blockdev.sh@793 -- # [[ nvme == gpt ]] 00:06:59.492 10:38:59 blockdev_nvme -- bdev/blockdev.sh@797 -- # [[ nvme == crypto_sw ]] 00:06:59.492 10:38:59 blockdev_nvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:06:59.492 10:38:59 blockdev_nvme -- bdev/blockdev.sh@810 -- # cleanup 00:06:59.492 10:38:59 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:06:59.492 10:38:59 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:59.492 10:38:59 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:06:59.492 10:38:59 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:06:59.492 10:38:59 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:06:59.492 10:38:59 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:06:59.492 00:06:59.492 real 0m31.606s 00:06:59.492 user 0m49.232s 00:06:59.492 sys 0m5.575s 00:06:59.492 10:38:59 blockdev_nvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:59.492 10:38:59 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:59.492 ************************************ 00:06:59.492 END TEST blockdev_nvme 00:06:59.492 ************************************ 00:06:59.492 10:38:59 -- spdk/autotest.sh@209 -- # uname -s 00:06:59.492 10:38:59 -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]] 00:06:59.492 10:38:59 -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:06:59.492 10:38:59 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:59.492 10:38:59 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:59.492 10:38:59 -- common/autotest_common.sh@10 -- # set +x 00:06:59.492 ************************************ 00:06:59.492 START TEST blockdev_nvme_gpt 00:06:59.492 ************************************ 00:06:59.492 10:38:59 blockdev_nvme_gpt -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:06:59.492 * Looking for test storage... 
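Both JSON failures above are deliberate fixtures: json_config requires the file to be a single object whose "subsystems" key is an array of subsystem blocks. A minimal well-formed counterpart is sketched below; the attach_controller parameters mirror the ones used later in this log, and the exact layout should be treated as illustrative.

# Minimal well-formed SPDK JSON config, in contrast to the nonenclosed and
# nonarray fixtures rejected above. Layout is illustrative.
cat > /tmp/bdev.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": { "trtype": "PCIe", "name": "Nvme0", "traddr": "0000:00:10.0" }
        }
      ]
    }
  ]
}
EOF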
00:06:59.492 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:06:59.492 10:38:59 blockdev_nvme_gpt -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:59.492 10:38:59 blockdev_nvme_gpt -- common/autotest_common.sh@1681 -- # lcov --version 00:06:59.492 10:38:59 blockdev_nvme_gpt -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:59.492 10:38:59 blockdev_nvme_gpt -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:59.492 10:38:59 blockdev_nvme_gpt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:59.492 10:38:59 blockdev_nvme_gpt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:59.492 10:38:59 blockdev_nvme_gpt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:59.492 10:38:59 blockdev_nvme_gpt -- scripts/common.sh@336 -- # IFS=.-: 00:06:59.492 10:38:59 blockdev_nvme_gpt -- scripts/common.sh@336 -- # read -ra ver1 00:06:59.492 10:38:59 blockdev_nvme_gpt -- scripts/common.sh@337 -- # IFS=.-: 00:06:59.492 10:38:59 blockdev_nvme_gpt -- scripts/common.sh@337 -- # read -ra ver2 00:06:59.492 10:38:59 blockdev_nvme_gpt -- scripts/common.sh@338 -- # local 'op=<' 00:06:59.492 10:38:59 blockdev_nvme_gpt -- scripts/common.sh@340 -- # ver1_l=2 00:06:59.492 10:38:59 blockdev_nvme_gpt -- scripts/common.sh@341 -- # ver2_l=1 00:06:59.492 10:38:59 blockdev_nvme_gpt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:59.492 10:38:59 blockdev_nvme_gpt -- scripts/common.sh@344 -- # case "$op" in 00:06:59.492 10:38:59 blockdev_nvme_gpt -- scripts/common.sh@345 -- # : 1 00:06:59.492 10:38:59 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:59.492 10:38:59 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:59.492 10:38:59 blockdev_nvme_gpt -- scripts/common.sh@365 -- # decimal 1 00:06:59.492 10:38:59 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=1 00:06:59.492 10:38:59 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:59.492 10:38:59 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 1 00:06:59.492 10:38:59 blockdev_nvme_gpt -- scripts/common.sh@365 -- # ver1[v]=1 00:06:59.492 10:38:59 blockdev_nvme_gpt -- scripts/common.sh@366 -- # decimal 2 00:06:59.492 10:38:59 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=2 00:06:59.492 10:38:59 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:59.492 10:38:59 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 2 00:06:59.492 10:38:59 blockdev_nvme_gpt -- scripts/common.sh@366 -- # ver2[v]=2 00:06:59.492 10:38:59 blockdev_nvme_gpt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:59.492 10:38:59 blockdev_nvme_gpt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:59.492 10:38:59 blockdev_nvme_gpt -- scripts/common.sh@368 -- # return 0 00:06:59.492 10:38:59 blockdev_nvme_gpt -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:59.492 10:38:59 blockdev_nvme_gpt -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:59.492 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.492 --rc genhtml_branch_coverage=1 00:06:59.492 --rc genhtml_function_coverage=1 00:06:59.492 --rc genhtml_legend=1 00:06:59.492 --rc geninfo_all_blocks=1 00:06:59.492 --rc geninfo_unexecuted_blocks=1 00:06:59.492 00:06:59.492 ' 00:06:59.493 10:38:59 blockdev_nvme_gpt -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:59.493 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.493 --rc 
genhtml_branch_coverage=1 00:06:59.493 --rc genhtml_function_coverage=1 00:06:59.493 --rc genhtml_legend=1 00:06:59.493 --rc geninfo_all_blocks=1 00:06:59.493 --rc geninfo_unexecuted_blocks=1 00:06:59.493 00:06:59.493 ' 00:06:59.493 10:38:59 blockdev_nvme_gpt -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:59.493 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.493 --rc genhtml_branch_coverage=1 00:06:59.493 --rc genhtml_function_coverage=1 00:06:59.493 --rc genhtml_legend=1 00:06:59.493 --rc geninfo_all_blocks=1 00:06:59.493 --rc geninfo_unexecuted_blocks=1 00:06:59.493 00:06:59.493 ' 00:06:59.754 10:38:59 blockdev_nvme_gpt -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:59.754 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.754 --rc genhtml_branch_coverage=1 00:06:59.754 --rc genhtml_function_coverage=1 00:06:59.754 --rc genhtml_legend=1 00:06:59.754 --rc geninfo_all_blocks=1 00:06:59.754 --rc geninfo_unexecuted_blocks=1 00:06:59.754 00:06:59.754 ' 00:06:59.754 10:38:59 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:59.754 10:38:59 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:06:59.754 10:38:59 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:06:59.754 10:38:59 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:59.754 10:38:59 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:06:59.754 10:38:59 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:06:59.754 10:38:59 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:06:59.754 10:38:59 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:06:59.754 10:38:59 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:06:59.754 10:38:59 blockdev_nvme_gpt -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:06:59.754 10:38:59 blockdev_nvme_gpt -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:06:59.754 10:38:59 blockdev_nvme_gpt -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:06:59.754 10:38:59 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # uname -s 00:06:59.754 10:38:59 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:06:59.754 10:38:59 blockdev_nvme_gpt -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:06:59.754 10:38:59 blockdev_nvme_gpt -- bdev/blockdev.sh@681 -- # test_type=gpt 00:06:59.754 10:38:59 blockdev_nvme_gpt -- bdev/blockdev.sh@682 -- # crypto_device= 00:06:59.754 10:38:59 blockdev_nvme_gpt -- bdev/blockdev.sh@683 -- # dek= 00:06:59.754 10:38:59 blockdev_nvme_gpt -- bdev/blockdev.sh@684 -- # env_ctx= 00:06:59.754 10:38:59 blockdev_nvme_gpt -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:06:59.754 10:38:59 blockdev_nvme_gpt -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:06:59.754 10:38:59 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == bdev ]] 00:06:59.754 10:38:59 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == crypto_* ]] 00:06:59.754 10:38:59 blockdev_nvme_gpt -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:06:59.755 10:38:59 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=72699 00:06:59.755 10:38:59 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:06:59.755 10:38:59 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 72699 
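waitforlisten blocks until the freshly started spdk_tgt answers on its RPC socket. A condensed sketch of that start/wait pattern, using the paths from this log; the real helper also verifies the pid is still alive and caps the number of retries.

# Condensed start/wait pattern for spdk_tgt, as traced above. Sketch only.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt &
spdk_tgt_pid=$!
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
        spdk_get_version >/dev/null 2>&1; do
    sleep 0.1
done
echo "spdk_tgt $spdk_tgt_pid is listening on /var/tmp/spdk.sock"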
00:06:59.755 10:38:59 blockdev_nvme_gpt -- common/autotest_common.sh@831 -- # '[' -z 72699 ']' 00:06:59.755 10:38:59 blockdev_nvme_gpt -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:59.755 10:38:59 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:06:59.755 10:38:59 blockdev_nvme_gpt -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:59.755 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:59.755 10:38:59 blockdev_nvme_gpt -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:59.755 10:38:59 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:59.755 10:38:59 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:59.755 [2024-12-16 10:38:59.558603] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:06:59.755 [2024-12-16 10:38:59.558716] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72699 ] 00:06:59.755 [2024-12-16 10:38:59.687821] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.755 [2024-12-16 10:38:59.720771] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.700 10:39:00 blockdev_nvme_gpt -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:00.700 10:39:00 blockdev_nvme_gpt -- common/autotest_common.sh@864 -- # return 0 00:07:00.700 10:39:00 blockdev_nvme_gpt -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:07:00.700 10:39:00 blockdev_nvme_gpt -- bdev/blockdev.sh@701 -- # setup_gpt_conf 00:07:00.700 10:39:00 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:00.700 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:00.961 Waiting for block devices as requested 00:07:00.961 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:07:01.221 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:07:01.221 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:07:01.221 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:07:06.530 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:07:06.530 10:39:06 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:07:06.530 10:39:06 blockdev_nvme_gpt -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:07:06.530 10:39:06 blockdev_nvme_gpt -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:07:06.530 10:39:06 blockdev_nvme_gpt -- common/autotest_common.sh@1656 -- # local nvme bdf 00:07:06.530 10:39:06 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:07:06.530 10:39:06 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:07:06.530 10:39:06 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:07:06.530 10:39:06 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:07:06.530 10:39:06 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:07:06.530 10:39:06 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:07:06.530 10:39:06 
blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:07:06.530 10:39:06 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:07:06.530 10:39:06 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:07:06.530 10:39:06 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:07:06.530 10:39:06 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:07:06.530 10:39:06 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n1 00:07:06.530 10:39:06 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:07:06.530 10:39:06 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:07:06.530 10:39:06 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:07:06.530 10:39:06 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:07:06.530 10:39:06 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n2 00:07:06.530 10:39:06 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme2n2 00:07:06.530 10:39:06 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:07:06.530 10:39:06 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:07:06.530 10:39:06 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:07:06.530 10:39:06 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n3 00:07:06.530 10:39:06 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme2n3 00:07:06.530 10:39:06 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:07:06.530 10:39:06 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:07:06.530 10:39:06 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:07:06.530 10:39:06 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3c3n1 00:07:06.530 10:39:06 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme3c3n1 00:07:06.530 10:39:06 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:07:06.530 10:39:06 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:07:06.530 10:39:06 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:07:06.530 10:39:06 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n1 00:07:06.530 10:39:06 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme3n1 00:07:06.530 10:39:06 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:07:06.530 10:39:06 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:07:06.530 10:39:06 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1') 00:07:06.530 10:39:06 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:07:06.530 10:39:06 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:07:06.530 10:39:06 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 00:07:06.530 10:39:06 
blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:07:06.530 10:39:06 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:07:06.530 10:39:06 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:07:06.530 10:39:06 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:07:06.530 BYT; 00:07:06.530 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:07:06.530 10:39:06 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:07:06.530 BYT; 00:07:06.530 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:07:06.530 10:39:06 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:07:06.530 10:39:06 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:07:06.530 10:39:06 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 00:07:06.530 10:39:06 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:07:06.530 10:39:06 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:07:06.530 10:39:06 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:07:06.530 10:39:06 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:07:06.530 10:39:06 blockdev_nvme_gpt -- scripts/common.sh@411 -- # local spdk_guid 00:07:06.530 10:39:06 blockdev_nvme_gpt -- scripts/common.sh@413 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:07:06.530 10:39:06 blockdev_nvme_gpt -- scripts/common.sh@415 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:06.530 10:39:06 blockdev_nvme_gpt -- scripts/common.sh@416 -- # IFS='()' 00:07:06.530 10:39:06 blockdev_nvme_gpt -- scripts/common.sh@416 -- # read -r _ spdk_guid _ 00:07:06.530 10:39:06 blockdev_nvme_gpt -- scripts/common.sh@416 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:06.530 10:39:06 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:07:06.530 10:39:06 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:07:06.530 10:39:06 blockdev_nvme_gpt -- scripts/common.sh@419 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:07:06.530 10:39:06 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:07:06.530 10:39:06 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:07:06.530 10:39:06 blockdev_nvme_gpt -- scripts/common.sh@423 -- # local spdk_guid 00:07:06.530 10:39:06 blockdev_nvme_gpt -- scripts/common.sh@425 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:07:06.530 10:39:06 blockdev_nvme_gpt -- scripts/common.sh@427 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:06.530 10:39:06 blockdev_nvme_gpt -- scripts/common.sh@428 -- # IFS='()' 00:07:06.530 10:39:06 blockdev_nvme_gpt -- scripts/common.sh@428 -- # read -r _ spdk_guid _ 00:07:06.530 10:39:06 blockdev_nvme_gpt -- scripts/common.sh@428 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:06.530 10:39:06 blockdev_nvme_gpt -- scripts/common.sh@429 -- # 
spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:07:06.530 10:39:06 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:07:06.530 10:39:06 blockdev_nvme_gpt -- scripts/common.sh@431 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:07:06.530 10:39:06 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:07:06.530 10:39:06 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:07:07.465 The operation has completed successfully. 00:07:07.465 10:39:07 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:07:08.401 The operation has completed successfully. 00:07:08.401 10:39:08 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:08.967 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:09.226 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:07:09.226 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:07:09.226 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:07:09.226 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:07:09.484 10:39:09 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:07:09.484 10:39:09 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.484 10:39:09 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:09.484 [] 00:07:09.484 10:39:09 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.484 10:39:09 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:07:09.484 10:39:09 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:07:09.484 10:39:09 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:07:09.484 10:39:09 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:09.484 10:39:09 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:07:09.484 10:39:09 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.484 10:39:09 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:09.743 10:39:09 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.743 10:39:09 blockdev_nvme_gpt -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:07:09.743 10:39:09 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.743 10:39:09 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:09.743 10:39:09 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.743 10:39:09 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # cat 00:07:09.743 10:39:09 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:07:09.743 10:39:09 
blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.743 10:39:09 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:09.743 10:39:09 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.743 10:39:09 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:07:09.743 10:39:09 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.743 10:39:09 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:09.743 10:39:09 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.743 10:39:09 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:07:09.743 10:39:09 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.743 10:39:09 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:09.743 10:39:09 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.743 10:39:09 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:07:09.743 10:39:09 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:07:09.743 10:39:09 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:07:09.743 10:39:09 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.743 10:39:09 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:09.743 10:39:09 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.743 10:39:09 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:07:09.743 10:39:09 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # jq -r .name 00:07:09.744 10:39:09 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "d39f15f4-b1c9-4fe8-8cf8-feef7fd7de91"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "d39f15f4-b1c9-4fe8-8cf8-feef7fd7de91",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' 
"num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "1214014e-53e8-4b56-b32c-eda7b9553c61"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "1214014e-53e8-4b56-b32c-eda7b9553c61",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' 
"nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "1d78a4bf-6537-498c-8f3a-6fb51fb8d5e0"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "1d78a4bf-6537-498c-8f3a-6fb51fb8d5e0",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "ff5e794b-effc-4129-9b5e-bd536fa4f49f"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "ff5e794b-effc-4129-9b5e-bd536fa4f49f",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "d45633fb-4035-451f-84e3-28bbaede4182"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "d45633fb-4035-451f-84e3-28bbaede4182",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:07:09.744 10:39:09 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:07:09.744 10:39:09 blockdev_nvme_gpt -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:07:09.744 10:39:09 blockdev_nvme_gpt -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:07:09.744 10:39:09 blockdev_nvme_gpt -- bdev/blockdev.sh@753 -- # killprocess 72699 00:07:09.744 10:39:09 blockdev_nvme_gpt -- common/autotest_common.sh@950 -- # '[' -z 72699 ']' 00:07:09.744 10:39:09 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # kill -0 72699 00:07:09.744 10:39:09 blockdev_nvme_gpt -- common/autotest_common.sh@955 -- # uname 00:07:09.744 10:39:09 blockdev_nvme_gpt -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:10.004 10:39:09 blockdev_nvme_gpt -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72699 00:07:10.004 10:39:09 blockdev_nvme_gpt -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:10.004 10:39:09 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:10.004 killing process with pid 72699 00:07:10.004 10:39:09 blockdev_nvme_gpt -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72699' 00:07:10.004 10:39:09 blockdev_nvme_gpt -- common/autotest_common.sh@969 -- # kill 72699 00:07:10.004 10:39:09 blockdev_nvme_gpt -- common/autotest_common.sh@974 -- # wait 72699 00:07:10.004 10:39:09 blockdev_nvme_gpt -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:10.004 10:39:09 blockdev_nvme_gpt -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:10.004 10:39:09 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:07:10.004 10:39:09 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:10.004 10:39:09 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:10.263 ************************************ 00:07:10.263 START TEST bdev_hello_world 00:07:10.263 ************************************ 00:07:10.263 10:39:09 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:10.263 
[2024-12-16 10:39:10.048842] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:07:10.263 [2024-12-16 10:39:10.048949] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73315 ] 00:07:10.263 [2024-12-16 10:39:10.178487] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.263 [2024-12-16 10:39:10.208908] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.831 [2024-12-16 10:39:10.563979] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:07:10.831 [2024-12-16 10:39:10.564016] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:07:10.831 [2024-12-16 10:39:10.564037] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:07:10.831 [2024-12-16 10:39:10.565604] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:07:10.831 [2024-12-16 10:39:10.565896] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:07:10.831 [2024-12-16 10:39:10.565920] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:07:10.832 [2024-12-16 10:39:10.566111] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:07:10.832 00:07:10.832 [2024-12-16 10:39:10.566131] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:07:10.832 00:07:10.832 real 0m0.697s 00:07:10.832 user 0m0.475s 00:07:10.832 sys 0m0.120s 00:07:10.832 10:39:10 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:10.832 10:39:10 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:07:10.832 ************************************ 00:07:10.832 END TEST bdev_hello_world 00:07:10.832 ************************************ 00:07:10.832 10:39:10 blockdev_nvme_gpt -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:07:10.832 10:39:10 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:10.832 10:39:10 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:10.832 10:39:10 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:10.832 ************************************ 00:07:10.832 START TEST bdev_bounds 00:07:10.832 ************************************ 00:07:10.832 10:39:10 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:07:10.832 Process bdevio pid: 73337 00:07:10.832 10:39:10 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=73337 00:07:10.832 10:39:10 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:07:10.832 10:39:10 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 73337' 00:07:10.832 10:39:10 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 73337 00:07:10.832 10:39:10 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 73337 ']' 00:07:10.832 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:10.832 10:39:10 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:10.832 10:39:10 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:10.832 10:39:10 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:10.832 10:39:10 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:10.832 10:39:10 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:10.832 10:39:10 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:10.832 [2024-12-16 10:39:10.813573] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:07:10.832 [2024-12-16 10:39:10.813695] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73337 ] 00:07:11.091 [2024-12-16 10:39:10.947696] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:11.091 [2024-12-16 10:39:10.985404] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:11.091 [2024-12-16 10:39:10.985526] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.091 [2024-12-16 10:39:10.985601] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:07:12.026 10:39:11 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:12.026 10:39:11 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:07:12.026 10:39:11 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:07:12.026 I/O targets: 00:07:12.026 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:07:12.026 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:07:12.026 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:07:12.026 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:12.026 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:12.026 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:12.026 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:07:12.026 00:07:12.026 00:07:12.026 CUnit - A unit testing framework for C - Version 2.1-3 00:07:12.026 http://cunit.sourceforge.net/ 00:07:12.026 00:07:12.026 00:07:12.026 Suite: bdevio tests on: Nvme3n1 00:07:12.027 Test: blockdev write read block ...passed 00:07:12.027 Test: blockdev write zeroes read block ...passed 00:07:12.027 Test: blockdev write zeroes read no split ...passed 00:07:12.027 Test: blockdev write zeroes read split ...passed 00:07:12.027 Test: blockdev write zeroes read split partial ...passed 00:07:12.027 Test: blockdev reset ...[2024-12-16 10:39:11.764700] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0] resetting controller 00:07:12.027 [2024-12-16 10:39:11.766653] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
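
Bdevio was started with -w, so it idles until tests.py sends the perform_tests RPC and then runs one CUnit suite per bdev listed under "I/O targets". To reproduce this stage by hand, the same two commands from the log can be issued directly; a sketch, assuming the app is given time to bring up its RPC socket before the trigger is sent:

  # Launch bdevio in wait mode, then trigger the suites via RPC (commands from this log)
  /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &
  sleep 2   # crude wait for the RPC socket; the harness uses waitforlisten instead
  /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests
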
00:07:12.027 passed 00:07:12.027 Test: blockdev write read 8 blocks ...passed 00:07:12.027 Test: blockdev write read size > 128k ...passed 00:07:12.027 Test: blockdev write read invalid size ...passed 00:07:12.027 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:12.027 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:12.027 Test: blockdev write read max offset ...passed 00:07:12.027 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:12.027 Test: blockdev writev readv 8 blocks ...passed 00:07:12.027 Test: blockdev writev readv 30 x 1block ...passed 00:07:12.027 Test: blockdev writev readv block ...passed 00:07:12.027 Test: blockdev writev readv size > 128k ...passed 00:07:12.027 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:12.027 Test: blockdev comparev and writev ...[2024-12-16 10:39:11.773480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c320a000 len:0x1000 00:07:12.027 [2024-12-16 10:39:11.773587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:12.027 passed 00:07:12.027 Test: blockdev nvme passthru rw ...passed 00:07:12.027 Test: blockdev nvme passthru vendor specific ...[2024-12-16 10:39:11.774371] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:12.027 [2024-12-16 10:39:11.774444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:12.027 passed 00:07:12.027 Test: blockdev nvme admin passthru ...passed 00:07:12.027 Test: blockdev copy ...passed 00:07:12.027 Suite: bdevio tests on: Nvme2n3 00:07:12.027 Test: blockdev write read block ...passed 00:07:12.027 Test: blockdev write zeroes read block ...passed 00:07:12.027 Test: blockdev write zeroes read no split ...passed 00:07:12.027 Test: blockdev write zeroes read split ...passed 00:07:12.027 Test: blockdev write zeroes read split partial ...passed 00:07:12.027 Test: blockdev reset ...[2024-12-16 10:39:11.789069] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:07:12.027 [2024-12-16 10:39:11.791169] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:12.027 passed 00:07:12.027 Test: blockdev write read 8 blocks ...passed 00:07:12.027 Test: blockdev write read size > 128k ...passed 00:07:12.027 Test: blockdev write read invalid size ...passed 00:07:12.027 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:12.027 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:12.027 Test: blockdev write read max offset ...passed 00:07:12.027 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:12.027 Test: blockdev writev readv 8 blocks ...passed 00:07:12.027 Test: blockdev writev readv 30 x 1block ...passed 00:07:12.027 Test: blockdev writev readv block ...passed 00:07:12.027 Test: blockdev writev readv size > 128k ...passed 00:07:12.027 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:12.027 Test: blockdev comparev and writev ...[2024-12-16 10:39:11.797538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2af204000 len:0x1000 00:07:12.027 [2024-12-16 10:39:11.797621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:12.027 passed 00:07:12.027 Test: blockdev nvme passthru rw ...passed 00:07:12.027 Test: blockdev nvme passthru vendor specific ...[2024-12-16 10:39:11.798308] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:12.027 passed 00:07:12.027 Test: blockdev nvme admin passthru ...[2024-12-16 10:39:11.798363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:12.027 passed 00:07:12.027 Test: blockdev copy ...passed 00:07:12.027 Suite: bdevio tests on: Nvme2n2 00:07:12.027 Test: blockdev write read block ...passed 00:07:12.027 Test: blockdev write zeroes read block ...passed 00:07:12.027 Test: blockdev write zeroes read no split ...passed 00:07:12.027 Test: blockdev write zeroes read split ...passed 00:07:12.027 Test: blockdev write zeroes read split partial ...passed 00:07:12.027 Test: blockdev reset ...[2024-12-16 10:39:11.812512] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:07:12.027 [2024-12-16 10:39:11.814048] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
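
The COMPARE FAILURE (02/85) and INVALID OPCODE (00/01) completions printed inside each suite are expected, not test failures: bdevio deliberately submits mismatching COMPARE and unsupported passthru commands to exercise the error paths, and every enclosing test still reports "passed". The place to check for real failures is the CUnit run summary at the end of the suite output; a sketch, assuming the output was captured to a file:

  # Confirm zero failed suites/tests in a captured bdevio log (file name assumed)
  grep -A3 'Run Summary' bdevio.log
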
00:07:12.027 passed 00:07:12.027 Test: blockdev write read 8 blocks ...passed 00:07:12.027 Test: blockdev write read size > 128k ...passed 00:07:12.027 Test: blockdev write read invalid size ...passed 00:07:12.027 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:12.027 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:12.027 Test: blockdev write read max offset ...passed 00:07:12.027 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:12.027 Test: blockdev writev readv 8 blocks ...passed 00:07:12.027 Test: blockdev writev readv 30 x 1block ...passed 00:07:12.027 Test: blockdev writev readv block ...passed 00:07:12.027 Test: blockdev writev readv size > 128k ...passed 00:07:12.027 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:12.027 Test: blockdev comparev and writev ...[2024-12-16 10:39:11.818150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2af204000 len:0x1000 00:07:12.027 [2024-12-16 10:39:11.818191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:12.027 passed 00:07:12.027 Test: blockdev nvme passthru rw ...passed 00:07:12.027 Test: blockdev nvme passthru vendor specific ...[2024-12-16 10:39:11.818650] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:12.027 passed 00:07:12.027 Test: blockdev nvme admin passthru ...[2024-12-16 10:39:11.818677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:12.027 passed 00:07:12.027 Test: blockdev copy ...passed 00:07:12.027 Suite: bdevio tests on: Nvme2n1 00:07:12.027 Test: blockdev write read block ...passed 00:07:12.027 Test: blockdev write zeroes read block ...passed 00:07:12.027 Test: blockdev write zeroes read no split ...passed 00:07:12.027 Test: blockdev write zeroes read split ...passed 00:07:12.027 Test: blockdev write zeroes read split partial ...passed 00:07:12.027 Test: blockdev reset ...[2024-12-16 10:39:11.834371] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:07:12.027 [2024-12-16 10:39:11.835976] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:12.027 passed 00:07:12.027 Test: blockdev write read 8 blocks ...passed 00:07:12.027 Test: blockdev write read size > 128k ...passed 00:07:12.027 Test: blockdev write read invalid size ...passed 00:07:12.027 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:12.027 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:12.027 Test: blockdev write read max offset ...passed 00:07:12.027 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:12.027 Test: blockdev writev readv 8 blocks ...passed 00:07:12.027 Test: blockdev writev readv 30 x 1block ...passed 00:07:12.027 Test: blockdev writev readv block ...passed 00:07:12.027 Test: blockdev writev readv size > 128k ...passed 00:07:12.027 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:12.027 Test: blockdev comparev and writev ...[2024-12-16 10:39:11.840777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2af206000 len:0x1000 00:07:12.027 [2024-12-16 10:39:11.840815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:12.027 passed 00:07:12.027 Test: blockdev nvme passthru rw ...passed 00:07:12.027 Test: blockdev nvme passthru vendor specific ...[2024-12-16 10:39:11.841516] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:12.027 [2024-12-16 10:39:11.841541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:12.027 passed 00:07:12.027 Test: blockdev nvme admin passthru ...passed 00:07:12.027 Test: blockdev copy ...passed 00:07:12.027 Suite: bdevio tests on: Nvme1n1p2 00:07:12.027 Test: blockdev write read block ...passed 00:07:12.027 Test: blockdev write zeroes read block ...passed 00:07:12.027 Test: blockdev write zeroes read no split ...passed 00:07:12.027 Test: blockdev write zeroes read split ...passed 00:07:12.027 Test: blockdev write zeroes read split partial ...passed 00:07:12.027 Test: blockdev reset ...[2024-12-16 10:39:11.857822] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller 00:07:12.027 [2024-12-16 10:39:11.859148] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
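
Each suite's "blockdev reset" test drives the sequence seen in these notices: nvme_ctrlr_disconnect on the controller's PCI address, followed by a successful _bdev_nvme_reset_ctrlr_complete. A reset can also be requested administratively through rpc.py; a sketch, noting that bdev_nvme_reset_controller takes the controller name, which is assumed here to be Nvme1 for the 0000:00:11.0 device per the usual NvmeXn1 bdev naming:

  # Request a controller reset by RPC (controller name is an assumption)
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_reset_controller Nvme1
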
00:07:12.027 passed 00:07:12.027 Test: blockdev write read 8 blocks ...passed 00:07:12.027 Test: blockdev write read size > 128k ...passed 00:07:12.027 Test: blockdev write read invalid size ...passed 00:07:12.027 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:12.027 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:12.027 Test: blockdev write read max offset ...passed 00:07:12.027 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:12.027 Test: blockdev writev readv 8 blocks ...passed 00:07:12.027 Test: blockdev writev readv 30 x 1block ...passed 00:07:12.027 Test: blockdev writev readv block ...passed 00:07:12.027 Test: blockdev writev readv size > 128k ...passed 00:07:12.027 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:12.027 Test: blockdev comparev and writev ...[2024-12-16 10:39:11.864762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x2af202000 len:0x1000 00:07:12.027 [2024-12-16 10:39:11.864800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:12.027 passed 00:07:12.027 Test: blockdev nvme passthru rw ...passed 00:07:12.027 Test: blockdev nvme passthru vendor specific ...passed 00:07:12.027 Test: blockdev nvme admin passthru ...passed 00:07:12.027 Test: blockdev copy ...passed 00:07:12.027 Suite: bdevio tests on: Nvme1n1p1 00:07:12.027 Test: blockdev write read block ...passed 00:07:12.027 Test: blockdev write zeroes read block ...passed 00:07:12.027 Test: blockdev write zeroes read no split ...passed 00:07:12.027 Test: blockdev write zeroes read split ...passed 00:07:12.027 Test: blockdev write zeroes read split partial ...passed 00:07:12.027 Test: blockdev reset ...[2024-12-16 10:39:11.875730] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller 00:07:12.027 [2024-12-16 10:39:11.877061] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
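
Nvme1n1p1 and Nvme1n1p2 are not raw namespaces but GPT partition bdevs carved out of Nvme1n1, which is why this test set is named blockdev_nvme_gpt and why their block counts (655104 and 655103, per the I/O targets list) come from the partition table rather than the namespace geometry. Their layout can be inspected with the bdev_get_bdevs RPC, whose -b flag filters by name; a sketch:

  # Inspect the GPT partition bdevs by name (rpc.py from this repo checkout)
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme1n1p1
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme1n1p2
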
00:07:12.027 passed 00:07:12.027 Test: blockdev write read 8 blocks ...passed 00:07:12.027 Test: blockdev write read size > 128k ...passed 00:07:12.027 Test: blockdev write read invalid size ...passed 00:07:12.027 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:12.028 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:12.028 Test: blockdev write read max offset ...passed 00:07:12.028 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:12.028 Test: blockdev writev readv 8 blocks ...passed 00:07:12.028 Test: blockdev writev readv 30 x 1block ...passed 00:07:12.028 Test: blockdev writev readv block ...passed 00:07:12.028 Test: blockdev writev readv size > 128k ...passed 00:07:12.028 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:12.028 Test: blockdev comparev and writev ...[2024-12-16 10:39:11.881542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x2c643b000 len:0x1000 00:07:12.028 [2024-12-16 10:39:11.881580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:12.028 passed 00:07:12.028 Test: blockdev nvme passthru rw ...passed 00:07:12.028 Test: blockdev nvme passthru vendor specific ...passed 00:07:12.028 Test: blockdev nvme admin passthru ...passed 00:07:12.028 Test: blockdev copy ...passed 00:07:12.028 Suite: bdevio tests on: Nvme0n1 00:07:12.028 Test: blockdev write read block ...passed 00:07:12.028 Test: blockdev write zeroes read block ...passed 00:07:12.028 Test: blockdev write zeroes read no split ...passed 00:07:12.028 Test: blockdev write zeroes read split ...passed 00:07:12.028 Test: blockdev write zeroes read split partial ...passed 00:07:12.028 Test: blockdev reset ...[2024-12-16 10:39:11.892469] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:07:12.028 [2024-12-16 10:39:11.893767] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:07:12.028 passed 00:07:12.028 Test: blockdev write read 8 blocks ...passed 00:07:12.028 Test: blockdev write read size > 128k ...passed 00:07:12.028 Test: blockdev write read invalid size ...passed 00:07:12.028 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:12.028 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:12.028 Test: blockdev write read max offset ...passed 00:07:12.028 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:12.028 Test: blockdev writev readv 8 blocks ...passed 00:07:12.028 Test: blockdev writev readv 30 x 1block ...passed 00:07:12.028 Test: blockdev writev readv block ...passed 00:07:12.028 Test: blockdev writev readv size > 128k ...passed 00:07:12.028 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:12.028 Test: blockdev comparev and writev ...passed 00:07:12.028 Test: blockdev nvme passthru rw ...[2024-12-16 10:39:11.897123] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:07:12.028 separate metadata which is not supported yet. 
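
The ERROR line above is informational: bdevio skips its comparev_and_writev case on Nvme0n1 because that namespace is formatted with separate (non-interleaved) metadata, which this code path does not support yet, so the case is skipped rather than failed. Whether a bdev carries metadata, and how, can be read from its JSON description; a sketch, assuming the md_size and md_interleave fields present in recent SPDK bdev_get_bdevs output:

  # Check the metadata layout of Nvme0n1 (field names are an assumption)
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 | jq '.[0] | {block_size, md_size, md_interleave}'
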
00:07:12.028 passed 00:07:12.028 Test: blockdev nvme passthru vendor specific ...[2024-12-16 10:39:11.897533] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:07:12.028 [2024-12-16 10:39:11.897566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:07:12.028 passed 00:07:12.028 Test: blockdev nvme admin passthru ...passed 00:07:12.028 Test: blockdev copy ...passed 00:07:12.028 00:07:12.028 Run Summary: Type Total Ran Passed Failed Inactive 00:07:12.028 suites 7 7 n/a 0 0 00:07:12.028 tests 161 161 161 0 0 00:07:12.028 asserts 1025 1025 1025 0 n/a 00:07:12.028 00:07:12.028 Elapsed time = 0.368 seconds 00:07:12.028 0 00:07:12.028 10:39:11 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 73337 00:07:12.028 10:39:11 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 73337 ']' 00:07:12.028 10:39:11 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 73337 00:07:12.028 10:39:11 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:07:12.028 10:39:11 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:12.028 10:39:11 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73337 00:07:12.028 10:39:11 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:12.028 10:39:11 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:12.028 killing process with pid 73337 00:07:12.028 10:39:11 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73337' 00:07:12.028 10:39:11 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@969 -- # kill 73337 00:07:12.028 10:39:11 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@974 -- # wait 73337 00:07:12.289 10:39:12 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:07:12.289 00:07:12.289 real 0m1.325s 00:07:12.289 user 0m3.431s 00:07:12.289 sys 0m0.243s 00:07:12.289 10:39:12 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:12.289 10:39:12 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:12.289 ************************************ 00:07:12.289 END TEST bdev_bounds 00:07:12.289 ************************************ 00:07:12.289 10:39:12 blockdev_nvme_gpt -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:12.289 10:39:12 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:12.289 10:39:12 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:12.289 10:39:12 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:12.289 ************************************ 00:07:12.289 START TEST bdev_nbd 00:07:12.289 ************************************ 00:07:12.289 10:39:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:12.289 10:39:12 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:07:12.289 10:39:12 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ 
Linux == Linux ]] 00:07:12.289 10:39:12 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:12.289 10:39:12 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:12.289 10:39:12 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:12.289 10:39:12 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:07:12.289 10:39:12 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7 00:07:12.289 10:39:12 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:07:12.289 10:39:12 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:07:12.289 10:39:12 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:07:12.289 10:39:12 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7 00:07:12.289 10:39:12 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:12.289 10:39:12 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:07:12.289 10:39:12 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:12.289 10:39:12 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:07:12.289 10:39:12 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=73391 00:07:12.289 10:39:12 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:07:12.289 10:39:12 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 73391 /var/tmp/spdk-nbd.sock 00:07:12.289 10:39:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 73391 ']' 00:07:12.289 10:39:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:12.289 10:39:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:12.289 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:12.289 10:39:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:12.289 10:39:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:12.289 10:39:12 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:12.289 10:39:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:12.289 [2024-12-16 10:39:12.208192] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
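
The nbd test that now starts exports each of the seven bdevs as a kernel /dev/nbdX node, sanity-checks every one with a single-block direct-I/O dd, and then tears the mappings down. Reduced to one device, the core sequence visible in the rest of this log looks like the sketch below, built from the commands the harness itself issues:

  # Map a bdev to an NBD device, read one 4 KiB block, unmap (commands from this log)
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0
  dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
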
00:07:12.289 [2024-12-16 10:39:12.208303] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:12.548 [2024-12-16 10:39:12.343590] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.548 [2024-12-16 10:39:12.382663] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.113 10:39:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:13.114 10:39:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:07:13.114 10:39:13 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:13.114 10:39:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:13.114 10:39:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:13.114 10:39:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:07:13.114 10:39:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:13.114 10:39:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:13.114 10:39:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:13.114 10:39:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:07:13.114 10:39:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:07:13.114 10:39:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:07:13.114 10:39:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:07:13.114 10:39:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:13.114 10:39:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:07:13.371 10:39:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:07:13.371 10:39:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:07:13.371 10:39:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:07:13.371 10:39:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:13.371 10:39:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:13.371 10:39:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:13.371 10:39:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:13.371 10:39:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:13.371 10:39:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:13.371 10:39:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:13.371 10:39:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:13.371 10:39:13 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:13.371 1+0 records in 00:07:13.371 1+0 records out 00:07:13.371 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000297433 s, 13.8 MB/s 00:07:13.371 10:39:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:13.371 10:39:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:13.371 10:39:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:13.371 10:39:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:13.371 10:39:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:13.371 10:39:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:13.371 10:39:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:13.372 10:39:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 00:07:13.630 10:39:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:07:13.630 10:39:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:07:13.630 10:39:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:07:13.630 10:39:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:07:13.630 10:39:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:13.630 10:39:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:13.630 10:39:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:13.630 10:39:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:07:13.630 10:39:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:13.630 10:39:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:13.630 10:39:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:13.630 10:39:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:13.630 1+0 records in 00:07:13.630 1+0 records out 00:07:13.630 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000264563 s, 15.5 MB/s 00:07:13.630 10:39:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:13.630 10:39:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:13.630 10:39:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:13.630 10:39:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:13.630 10:39:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:13.630 10:39:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:13.630 10:39:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:13.630 10:39:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme1n1p2 00:07:13.889 10:39:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:07:13.889 10:39:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:07:13.889 10:39:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:07:13.889 10:39:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd2 00:07:13.889 10:39:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:13.889 10:39:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:13.889 10:39:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:13.889 10:39:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd2 /proc/partitions 00:07:13.889 10:39:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:13.889 10:39:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:13.889 10:39:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:13.889 10:39:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:13.889 1+0 records in 00:07:13.889 1+0 records out 00:07:13.889 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000370348 s, 11.1 MB/s 00:07:13.889 10:39:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:13.889 10:39:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:13.889 10:39:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:13.889 10:39:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:13.889 10:39:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:13.889 10:39:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:13.889 10:39:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:13.889 10:39:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:07:14.148 10:39:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:07:14.148 10:39:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:07:14.148 10:39:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:07:14.148 10:39:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd3 00:07:14.148 10:39:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:14.148 10:39:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:14.148 10:39:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:14.148 10:39:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd3 /proc/partitions 00:07:14.148 10:39:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:14.148 10:39:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:14.148 10:39:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:14.148 10:39:13 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@885 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:14.148 1+0 records in 00:07:14.148 1+0 records out 00:07:14.148 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000527253 s, 7.8 MB/s 00:07:14.148 10:39:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:14.148 10:39:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:14.148 10:39:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:14.148 10:39:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:14.148 10:39:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:14.148 10:39:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:14.148 10:39:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:14.148 10:39:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:07:14.406 10:39:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:07:14.406 10:39:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:07:14.406 10:39:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:07:14.406 10:39:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd4 00:07:14.406 10:39:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:14.406 10:39:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:14.406 10:39:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:14.406 10:39:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd4 /proc/partitions 00:07:14.406 10:39:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:14.406 10:39:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:14.406 10:39:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:14.406 10:39:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:14.406 1+0 records in 00:07:14.406 1+0 records out 00:07:14.407 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000389073 s, 10.5 MB/s 00:07:14.407 10:39:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:14.407 10:39:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:14.407 10:39:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:14.407 10:39:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:14.407 10:39:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:14.407 10:39:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:14.407 10:39:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:14.407 10:39:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme2n3 00:07:14.666 10:39:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:07:14.666 10:39:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:07:14.666 10:39:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:07:14.666 10:39:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd5 00:07:14.666 10:39:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:14.666 10:39:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:14.666 10:39:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:14.666 10:39:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd5 /proc/partitions 00:07:14.666 10:39:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:14.666 10:39:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:14.666 10:39:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:14.666 10:39:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:14.666 1+0 records in 00:07:14.666 1+0 records out 00:07:14.666 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000924649 s, 4.4 MB/s 00:07:14.666 10:39:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:14.666 10:39:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:14.666 10:39:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:14.666 10:39:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:14.666 10:39:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:14.666 10:39:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:14.666 10:39:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:14.666 10:39:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:07:14.927 10:39:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:07:14.927 10:39:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:07:14.927 10:39:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:07:14.927 10:39:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd6 00:07:14.927 10:39:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:14.927 10:39:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:14.927 10:39:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:14.927 10:39:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd6 /proc/partitions 00:07:14.927 10:39:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:14.927 10:39:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:14.927 10:39:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:14.927 10:39:14 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@885 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:14.927 1+0 records in 00:07:14.927 1+0 records out 00:07:14.927 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000951078 s, 4.3 MB/s 00:07:14.927 10:39:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:14.927 10:39:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:14.927 10:39:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:14.927 10:39:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:14.927 10:39:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:14.927 10:39:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:14.927 10:39:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:14.927 10:39:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:14.927 10:39:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:07:14.927 { 00:07:14.927 "nbd_device": "/dev/nbd0", 00:07:14.927 "bdev_name": "Nvme0n1" 00:07:14.927 }, 00:07:14.927 { 00:07:14.927 "nbd_device": "/dev/nbd1", 00:07:14.927 "bdev_name": "Nvme1n1p1" 00:07:14.927 }, 00:07:14.927 { 00:07:14.927 "nbd_device": "/dev/nbd2", 00:07:14.927 "bdev_name": "Nvme1n1p2" 00:07:14.927 }, 00:07:14.927 { 00:07:14.927 "nbd_device": "/dev/nbd3", 00:07:14.927 "bdev_name": "Nvme2n1" 00:07:14.927 }, 00:07:14.927 { 00:07:14.927 "nbd_device": "/dev/nbd4", 00:07:14.927 "bdev_name": "Nvme2n2" 00:07:14.927 }, 00:07:14.927 { 00:07:14.927 "nbd_device": "/dev/nbd5", 00:07:14.927 "bdev_name": "Nvme2n3" 00:07:14.927 }, 00:07:14.927 { 00:07:14.927 "nbd_device": "/dev/nbd6", 00:07:14.927 "bdev_name": "Nvme3n1" 00:07:14.927 } 00:07:14.927 ]' 00:07:14.927 10:39:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:07:14.927 10:39:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:07:14.927 { 00:07:14.927 "nbd_device": "/dev/nbd0", 00:07:14.927 "bdev_name": "Nvme0n1" 00:07:14.927 }, 00:07:14.927 { 00:07:14.927 "nbd_device": "/dev/nbd1", 00:07:14.927 "bdev_name": "Nvme1n1p1" 00:07:14.927 }, 00:07:14.927 { 00:07:14.927 "nbd_device": "/dev/nbd2", 00:07:14.927 "bdev_name": "Nvme1n1p2" 00:07:14.927 }, 00:07:14.927 { 00:07:14.927 "nbd_device": "/dev/nbd3", 00:07:14.927 "bdev_name": "Nvme2n1" 00:07:14.927 }, 00:07:14.927 { 00:07:14.927 "nbd_device": "/dev/nbd4", 00:07:14.927 "bdev_name": "Nvme2n2" 00:07:14.927 }, 00:07:14.927 { 00:07:14.927 "nbd_device": "/dev/nbd5", 00:07:14.927 "bdev_name": "Nvme2n3" 00:07:14.927 }, 00:07:14.927 { 00:07:14.927 "nbd_device": "/dev/nbd6", 00:07:14.927 "bdev_name": "Nvme3n1" 00:07:14.927 } 00:07:14.927 ]' 00:07:14.927 10:39:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:07:15.188 10:39:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:07:15.188 10:39:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:15.188 10:39:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 
-- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:07:15.188 10:39:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:15.188 10:39:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:15.188 10:39:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:15.188 10:39:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:15.188 10:39:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:15.188 10:39:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:15.188 10:39:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:15.188 10:39:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:15.188 10:39:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:15.188 10:39:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:15.188 10:39:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:15.188 10:39:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:15.188 10:39:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:15.188 10:39:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:15.448 10:39:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:15.448 10:39:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:15.448 10:39:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:15.448 10:39:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:15.448 10:39:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:15.448 10:39:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:15.448 10:39:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:15.448 10:39:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:15.448 10:39:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:15.448 10:39:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:07:15.709 10:39:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:07:15.709 10:39:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:07:15.709 10:39:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:07:15.709 10:39:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:15.709 10:39:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:15.709 10:39:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:07:15.709 10:39:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:15.709 10:39:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:15.709 10:39:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:15.709 10:39:15 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:07:15.968 10:39:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:07:15.968 10:39:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:07:15.968 10:39:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:07:15.968 10:39:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:15.968 10:39:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:15.968 10:39:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:07:15.968 10:39:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:15.968 10:39:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:15.968 10:39:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:15.968 10:39:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:07:16.227 10:39:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:07:16.227 10:39:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:07:16.227 10:39:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:07:16.227 10:39:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:16.227 10:39:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:16.227 10:39:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:07:16.227 10:39:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:16.227 10:39:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:16.227 10:39:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:16.227 10:39:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:07:16.227 10:39:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:07:16.227 10:39:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:07:16.227 10:39:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:07:16.227 10:39:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:16.227 10:39:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:16.227 10:39:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:07:16.227 10:39:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:16.227 10:39:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:16.227 10:39:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:16.227 10:39:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:07:16.487 10:39:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:07:16.487 10:39:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:07:16.487 10:39:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 
-- # local nbd_name=nbd6 00:07:16.487 10:39:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:16.487 10:39:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:16.487 10:39:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:07:16.487 10:39:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:16.487 10:39:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:16.487 10:39:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:16.487 10:39:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:16.487 10:39:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:16.746 10:39:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:16.746 10:39:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:16.746 10:39:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:16.746 10:39:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:16.746 10:39:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:16.746 10:39:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:16.746 10:39:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:16.746 10:39:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:16.746 10:39:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:16.746 10:39:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:07:16.746 10:39:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:07:16.746 10:39:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:07:16.746 10:39:16 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:16.746 10:39:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:16.746 10:39:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:16.746 10:39:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:16.746 10:39:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:16.746 10:39:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:16.746 10:39:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:16.746 10:39:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:16.746 10:39:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:16.746 10:39:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:16.746 
10:39:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:16.746 10:39:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:16.746 10:39:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:07:16.746 10:39:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:16.746 10:39:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:16.746 10:39:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:07:17.005 /dev/nbd0 00:07:17.005 10:39:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:17.005 10:39:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:17.005 10:39:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:17.005 10:39:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:17.005 10:39:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:17.005 10:39:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:17.005 10:39:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:17.005 10:39:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:17.005 10:39:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:17.005 10:39:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:17.005 10:39:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:17.005 1+0 records in 00:07:17.005 1+0 records out 00:07:17.005 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000423957 s, 9.7 MB/s 00:07:17.005 10:39:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:17.005 10:39:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:17.005 10:39:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:17.005 10:39:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:17.005 10:39:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:17.005 10:39:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:17.005 10:39:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:17.005 10:39:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1 00:07:17.263 /dev/nbd1 00:07:17.263 10:39:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:17.263 10:39:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:17.263 10:39:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:07:17.263 10:39:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:17.263 10:39:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:17.263 10:39:17 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:17.263 10:39:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:07:17.263 10:39:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:17.263 10:39:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:17.263 10:39:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:17.263 10:39:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:17.263 1+0 records in 00:07:17.263 1+0 records out 00:07:17.263 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000479574 s, 8.5 MB/s 00:07:17.263 10:39:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:17.263 10:39:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:17.263 10:39:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:17.263 10:39:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:17.263 10:39:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:17.263 10:39:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:17.263 10:39:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:17.263 10:39:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10 00:07:17.522 /dev/nbd10 00:07:17.522 10:39:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:07:17.522 10:39:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:07:17.522 10:39:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd10 00:07:17.522 10:39:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:17.522 10:39:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:17.522 10:39:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:17.522 10:39:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd10 /proc/partitions 00:07:17.522 10:39:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:17.522 10:39:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:17.522 10:39:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:17.522 10:39:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:17.522 1+0 records in 00:07:17.522 1+0 records out 00:07:17.522 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000375073 s, 10.9 MB/s 00:07:17.522 10:39:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:17.522 10:39:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:17.522 10:39:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:17.522 10:39:17 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:17.522 10:39:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:17.522 10:39:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:17.522 10:39:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:17.522 10:39:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:07:17.780 /dev/nbd11 00:07:17.780 10:39:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:07:17.780 10:39:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:07:17.780 10:39:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd11 00:07:17.780 10:39:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:17.780 10:39:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:17.780 10:39:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:17.780 10:39:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd11 /proc/partitions 00:07:17.780 10:39:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:17.780 10:39:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:17.780 10:39:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:17.780 10:39:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:17.780 1+0 records in 00:07:17.780 1+0 records out 00:07:17.780 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00053369 s, 7.7 MB/s 00:07:17.780 10:39:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:17.780 10:39:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:17.780 10:39:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:17.780 10:39:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:17.780 10:39:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:17.780 10:39:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:17.781 10:39:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:17.781 10:39:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:07:18.039 /dev/nbd12 00:07:18.039 10:39:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:07:18.039 10:39:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:07:18.039 10:39:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd12 00:07:18.039 10:39:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:18.039 10:39:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:18.039 10:39:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:18.039 10:39:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd12 /proc/partitions 
00:07:18.039 10:39:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:18.039 10:39:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:18.039 10:39:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:18.039 10:39:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:18.039 1+0 records in 00:07:18.039 1+0 records out 00:07:18.039 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000456024 s, 9.0 MB/s 00:07:18.039 10:39:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:18.039 10:39:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:18.039 10:39:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:18.039 10:39:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:18.039 10:39:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:18.039 10:39:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:18.039 10:39:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:18.039 10:39:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:07:18.039 /dev/nbd13 00:07:18.039 10:39:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:07:18.039 10:39:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:07:18.039 10:39:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd13 00:07:18.039 10:39:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:18.039 10:39:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:18.039 10:39:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:18.039 10:39:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd13 /proc/partitions 00:07:18.039 10:39:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:18.039 10:39:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:18.039 10:39:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:18.039 10:39:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:18.298 1+0 records in 00:07:18.298 1+0 records out 00:07:18.298 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000512888 s, 8.0 MB/s 00:07:18.298 10:39:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:18.298 10:39:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:18.298 10:39:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:18.298 10:39:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:18.298 10:39:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:18.298 10:39:18 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:18.298 10:39:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:18.298 10:39:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:07:18.298 /dev/nbd14 00:07:18.298 10:39:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:07:18.298 10:39:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:07:18.298 10:39:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd14 00:07:18.298 10:39:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:18.298 10:39:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:18.298 10:39:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:18.298 10:39:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd14 /proc/partitions 00:07:18.298 10:39:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:18.298 10:39:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:18.298 10:39:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:18.298 10:39:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:18.298 1+0 records in 00:07:18.298 1+0 records out 00:07:18.298 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00060223 s, 6.8 MB/s 00:07:18.298 10:39:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:18.298 10:39:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:18.298 10:39:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:18.298 10:39:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:18.298 10:39:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:18.298 10:39:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:18.298 10:39:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:18.298 10:39:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:18.298 10:39:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:18.298 10:39:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:18.556 10:39:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:18.556 { 00:07:18.556 "nbd_device": "/dev/nbd0", 00:07:18.556 "bdev_name": "Nvme0n1" 00:07:18.556 }, 00:07:18.556 { 00:07:18.556 "nbd_device": "/dev/nbd1", 00:07:18.556 "bdev_name": "Nvme1n1p1" 00:07:18.556 }, 00:07:18.556 { 00:07:18.556 "nbd_device": "/dev/nbd10", 00:07:18.556 "bdev_name": "Nvme1n1p2" 00:07:18.556 }, 00:07:18.556 { 00:07:18.556 "nbd_device": "/dev/nbd11", 00:07:18.556 "bdev_name": "Nvme2n1" 00:07:18.556 }, 00:07:18.556 { 00:07:18.556 "nbd_device": "/dev/nbd12", 00:07:18.556 "bdev_name": "Nvme2n2" 00:07:18.556 }, 00:07:18.556 { 00:07:18.556 "nbd_device": "/dev/nbd13", 00:07:18.556 "bdev_name": "Nvme2n3" 
00:07:18.556 }, 00:07:18.556 { 00:07:18.556 "nbd_device": "/dev/nbd14", 00:07:18.556 "bdev_name": "Nvme3n1" 00:07:18.556 } 00:07:18.556 ]' 00:07:18.556 10:39:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:18.556 { 00:07:18.556 "nbd_device": "/dev/nbd0", 00:07:18.556 "bdev_name": "Nvme0n1" 00:07:18.556 }, 00:07:18.556 { 00:07:18.556 "nbd_device": "/dev/nbd1", 00:07:18.556 "bdev_name": "Nvme1n1p1" 00:07:18.556 }, 00:07:18.556 { 00:07:18.556 "nbd_device": "/dev/nbd10", 00:07:18.556 "bdev_name": "Nvme1n1p2" 00:07:18.556 }, 00:07:18.556 { 00:07:18.556 "nbd_device": "/dev/nbd11", 00:07:18.556 "bdev_name": "Nvme2n1" 00:07:18.556 }, 00:07:18.556 { 00:07:18.556 "nbd_device": "/dev/nbd12", 00:07:18.556 "bdev_name": "Nvme2n2" 00:07:18.556 }, 00:07:18.556 { 00:07:18.556 "nbd_device": "/dev/nbd13", 00:07:18.556 "bdev_name": "Nvme2n3" 00:07:18.556 }, 00:07:18.556 { 00:07:18.556 "nbd_device": "/dev/nbd14", 00:07:18.556 "bdev_name": "Nvme3n1" 00:07:18.556 } 00:07:18.556 ]' 00:07:18.556 10:39:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:18.556 10:39:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:18.556 /dev/nbd1 00:07:18.556 /dev/nbd10 00:07:18.556 /dev/nbd11 00:07:18.556 /dev/nbd12 00:07:18.556 /dev/nbd13 00:07:18.556 /dev/nbd14' 00:07:18.556 10:39:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:18.556 /dev/nbd1 00:07:18.556 /dev/nbd10 00:07:18.556 /dev/nbd11 00:07:18.556 /dev/nbd12 00:07:18.556 /dev/nbd13 00:07:18.556 /dev/nbd14' 00:07:18.556 10:39:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:18.556 10:39:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:07:18.556 10:39:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:07:18.556 10:39:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:07:18.556 10:39:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:07:18.556 10:39:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:07:18.556 10:39:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:18.556 10:39:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:18.556 10:39:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:18.556 10:39:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:18.556 10:39:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:18.556 10:39:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:07:18.556 256+0 records in 00:07:18.556 256+0 records out 00:07:18.556 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00781411 s, 134 MB/s 00:07:18.556 10:39:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:18.556 10:39:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:18.816 256+0 records in 00:07:18.816 256+0 records out 00:07:18.816 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.0597922 s, 17.5 MB/s 00:07:18.816 10:39:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:18.816 10:39:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:18.816 256+0 records in 00:07:18.816 256+0 records out 00:07:18.816 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0570514 s, 18.4 MB/s 00:07:18.816 10:39:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:18.816 10:39:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:07:18.816 256+0 records in 00:07:18.816 256+0 records out 00:07:18.816 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0601047 s, 17.4 MB/s 00:07:18.816 10:39:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:18.816 10:39:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:07:19.077 256+0 records in 00:07:19.077 256+0 records out 00:07:19.077 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.137339 s, 7.6 MB/s 00:07:19.077 10:39:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:19.077 10:39:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:07:19.338 256+0 records in 00:07:19.338 256+0 records out 00:07:19.338 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.237542 s, 4.4 MB/s 00:07:19.338 10:39:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:19.338 10:39:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:07:19.338 256+0 records in 00:07:19.338 256+0 records out 00:07:19.338 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.24276 s, 4.3 MB/s 00:07:19.338 10:39:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:19.338 10:39:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:07:19.599 256+0 records in 00:07:19.599 256+0 records out 00:07:19.599 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.211388 s, 5.0 MB/s 00:07:19.599 10:39:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:07:19.599 10:39:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:19.599 10:39:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:19.599 10:39:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:19.599 10:39:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:19.599 10:39:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:19.599 10:39:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:19.599 10:39:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in 
"${nbd_list[@]}" 00:07:19.599 10:39:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:07:19.599 10:39:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:19.599 10:39:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:07:19.599 10:39:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:19.599 10:39:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:07:19.599 10:39:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:19.599 10:39:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:07:19.599 10:39:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:19.599 10:39:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:07:19.599 10:39:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:19.599 10:39:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:07:19.860 10:39:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:19.860 10:39:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:07:19.860 10:39:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:19.860 10:39:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:19.860 10:39:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:19.860 10:39:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:19.860 10:39:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:19.860 10:39:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:19.860 10:39:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:19.860 10:39:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:19.860 10:39:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:19.860 10:39:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:19.860 10:39:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:19.860 10:39:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:19.860 10:39:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:19.860 10:39:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:19.860 10:39:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:19.860 10:39:19 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:07:19.860 10:39:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:19.860 10:39:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:20.121 10:39:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:20.121 10:39:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:20.121 10:39:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:20.121 10:39:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:20.121 10:39:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:20.121 10:39:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:20.121 10:39:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:20.121 10:39:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:20.121 10:39:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:20.121 10:39:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:07:20.381 10:39:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:07:20.381 10:39:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:07:20.381 10:39:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:07:20.381 10:39:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:20.381 10:39:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:20.381 10:39:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:07:20.381 10:39:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:20.381 10:39:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:20.381 10:39:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:20.381 10:39:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:07:20.642 10:39:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:07:20.642 10:39:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:07:20.642 10:39:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:07:20.642 10:39:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:20.642 10:39:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:20.642 10:39:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:07:20.642 10:39:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:20.642 10:39:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:20.642 10:39:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:20.642 10:39:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:07:20.904 10:39:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd12 00:07:20.904 10:39:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:07:20.904 10:39:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:07:20.904 10:39:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:20.904 10:39:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:20.904 10:39:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:07:20.904 10:39:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:20.904 10:39:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:20.904 10:39:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:20.904 10:39:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:07:21.165 10:39:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:07:21.165 10:39:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:07:21.165 10:39:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:07:21.165 10:39:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:21.165 10:39:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:21.165 10:39:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:07:21.165 10:39:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:21.165 10:39:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:21.165 10:39:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:21.165 10:39:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:07:21.165 10:39:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:07:21.165 10:39:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:07:21.165 10:39:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:07:21.165 10:39:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:21.165 10:39:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:21.165 10:39:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:07:21.165 10:39:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:21.165 10:39:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:21.165 10:39:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:21.165 10:39:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:21.165 10:39:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:21.423 10:39:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:21.423 10:39:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:21.423 10:39:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:21.423 10:39:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # 
nbd_disks_name= 00:07:21.423 10:39:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:21.423 10:39:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:21.423 10:39:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:21.423 10:39:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:21.423 10:39:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:21.423 10:39:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:07:21.423 10:39:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:21.423 10:39:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:07:21.423 10:39:21 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:21.423 10:39:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:21.423 10:39:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:07:21.423 10:39:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:07:21.684 malloc_lvol_verify 00:07:21.684 10:39:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:07:21.944 f3cfe600-f98f-41b6-a9f0-0fc36ab008f2 00:07:21.944 10:39:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:07:22.206 fceeec02-1135-4d4a-8c7e-81bf2aaffe31 00:07:22.206 10:39:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:07:22.466 /dev/nbd0 00:07:22.466 10:39:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:07:22.466 10:39:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:07:22.466 10:39:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:07:22.466 10:39:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:07:22.466 10:39:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:07:22.466 mke2fs 1.47.0 (5-Feb-2023) 00:07:22.466 Discarding device blocks: 0/4096 done 00:07:22.466 Creating filesystem with 4096 1k blocks and 1024 inodes 00:07:22.466 00:07:22.466 Allocating group tables: 0/1 done 00:07:22.466 Writing inode tables: 0/1 done 00:07:22.466 Creating journal (1024 blocks): done 00:07:22.466 Writing superblocks and filesystem accounting information: 0/1 done 00:07:22.466 00:07:22.466 10:39:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:22.466 10:39:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:22.466 10:39:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:22.466 10:39:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:22.466 10:39:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:22.466 10:39:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:07:22.466 10:39:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:22.727 10:39:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:22.727 10:39:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:22.727 10:39:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:22.727 10:39:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:22.727 10:39:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:22.727 10:39:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:22.727 10:39:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:22.727 10:39:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:22.727 10:39:22 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 73391 00:07:22.727 10:39:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 73391 ']' 00:07:22.727 10:39:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 73391 00:07:22.727 10:39:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:07:22.727 10:39:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:22.727 10:39:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73391 00:07:22.727 10:39:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:22.727 10:39:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:22.727 killing process with pid 73391 00:07:22.727 10:39:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73391' 00:07:22.727 10:39:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@969 -- # kill 73391 00:07:22.727 10:39:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@974 -- # wait 73391 00:07:22.727 10:39:22 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:07:22.727 00:07:22.727 real 0m10.548s 00:07:22.727 user 0m15.057s 00:07:22.727 sys 0m3.611s 00:07:22.727 10:39:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:22.727 ************************************ 00:07:22.727 END TEST bdev_nbd 00:07:22.727 ************************************ 00:07:22.727 10:39:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:22.988 10:39:22 blockdev_nvme_gpt -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:07:22.988 10:39:22 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = nvme ']' 00:07:22.988 skipping fio tests on NVMe due to multi-ns failures. 00:07:22.988 10:39:22 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = gpt ']' 00:07:22.988 10:39:22 blockdev_nvme_gpt -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
00:07:22.988 10:39:22 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:22.988 10:39:22 blockdev_nvme_gpt -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:22.988 10:39:22 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:07:22.988 10:39:22 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:22.988 10:39:22 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:22.988 ************************************ 00:07:22.988 START TEST bdev_verify 00:07:22.988 ************************************ 00:07:22.988 10:39:22 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:22.988 [2024-12-16 10:39:22.804279] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:07:22.988 [2024-12-16 10:39:22.804375] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73794 ] 00:07:22.988 [2024-12-16 10:39:22.938038] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:23.249 [2024-12-16 10:39:23.013909] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:23.249 [2024-12-16 10:39:23.013996] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.821 Running I/O for 5 seconds... 
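Note: bdev_verify is a plain bdevperf run against the JSON bdev config. Decoding the command traced above: -q 128 is the queue depth, -o 4096 the I/O size in bytes, -w verify a write-then-read-back pattern that checks data integrity, -t 5 the run time in seconds, and -m 0x3 a two-core mask (hence the two "Reactor started" lines). Reproducing it by hand, with paths as in the log:

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3

(-C is carried over verbatim from the harness; the other flags are the standard bdevperf depth/size/workload/time/mask options.)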
00:07:26.138 21312.00 IOPS, 83.25 MiB/s [2024-12-16T10:39:27.068Z] 21632.00 IOPS, 84.50 MiB/s [2024-12-16T10:39:28.012Z] 21162.67 IOPS, 82.67 MiB/s [2024-12-16T10:39:28.958Z] 20960.00 IOPS, 81.88 MiB/s [2024-12-16T10:39:28.958Z] 20160.00 IOPS, 78.75 MiB/s 00:07:28.969 Latency(us) 00:07:28.969 [2024-12-16T10:39:28.958Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:28.969 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:28.969 Verification LBA range: start 0x0 length 0xbd0bd 00:07:28.969 Nvme0n1 : 5.05 1545.20 6.04 0.00 0.00 82417.44 15526.99 85095.98 00:07:28.969 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:28.969 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:07:28.969 Nvme0n1 : 5.10 1268.38 4.95 0.00 0.00 100189.40 14317.10 104051.00 00:07:28.969 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:28.969 Verification LBA range: start 0x0 length 0x4ff80 00:07:28.969 Nvme1n1p1 : 5.09 1545.19 6.04 0.00 0.00 82143.86 12905.55 72593.72 00:07:28.969 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:28.969 Verification LBA range: start 0x4ff80 length 0x4ff80 00:07:28.969 Nvme1n1p1 : 5.13 1272.73 4.97 0.00 0.00 100156.31 18551.73 97598.23 00:07:28.969 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:28.969 Verification LBA range: start 0x0 length 0x4ff7f 00:07:28.969 Nvme1n1p2 : 5.10 1544.59 6.03 0.00 0.00 82006.20 13006.38 72593.72 00:07:28.969 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:28.969 Verification LBA range: start 0x4ff7f length 0x4ff7f 00:07:28.969 Nvme1n1p2 : 5.14 1270.96 4.96 0.00 0.00 99875.92 20669.05 89935.56 00:07:28.969 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:28.969 Verification LBA range: start 0x0 length 0x80000 00:07:28.969 Nvme2n1 : 5.13 1548.47 6.05 0.00 0.00 81966.50 19156.68 68964.04 00:07:28.969 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:28.969 Verification LBA range: start 0x80000 length 0x80000 00:07:28.969 Nvme2n1 : 5.14 1269.72 4.96 0.00 0.00 99747.25 19761.62 89532.26 00:07:28.969 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:28.969 Verification LBA range: start 0x0 length 0x80000 00:07:28.969 Nvme2n2 : 5.13 1547.91 6.05 0.00 0.00 81881.87 19459.15 71383.83 00:07:28.969 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:28.969 Verification LBA range: start 0x80000 length 0x80000 00:07:28.969 Nvme2n2 : 5.14 1269.37 4.96 0.00 0.00 99614.73 17845.96 95178.44 00:07:28.969 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:28.969 Verification LBA range: start 0x0 length 0x80000 00:07:28.969 Nvme2n3 : 5.13 1547.22 6.04 0.00 0.00 81791.43 16938.54 73400.32 00:07:28.969 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:28.969 Verification LBA range: start 0x80000 length 0x80000 00:07:28.969 Nvme2n3 : 5.14 1269.02 4.96 0.00 0.00 99510.78 16232.76 101227.91 00:07:28.969 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:28.969 Verification LBA range: start 0x0 length 0x20000 00:07:28.969 Nvme3n1 : 5.14 1545.15 6.04 0.00 0.00 81724.23 15325.34 73803.62 00:07:28.969 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:28.969 Verification LBA range: start 0x20000 length 0x20000 00:07:28.969 
Nvme3n1 : 5.15 1268.65 4.96 0.00 0.00 99403.58 14417.92 104051.00 00:07:28.969 [2024-12-16T10:39:28.958Z] =================================================================================================================== 00:07:28.969 [2024-12-16T10:39:28.958Z] Total : 19712.58 77.00 0.00 0.00 90034.96 12905.55 104051.00 00:07:29.543 00:07:29.543 real 0m6.746s 00:07:29.543 user 0m12.542s 00:07:29.543 sys 0m0.325s 00:07:29.544 10:39:29 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:29.544 10:39:29 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:07:29.544 ************************************ 00:07:29.544 END TEST bdev_verify 00:07:29.544 ************************************ 00:07:29.804 10:39:29 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:29.804 10:39:29 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:07:29.804 10:39:29 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:29.804 10:39:29 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:29.804 ************************************ 00:07:29.804 START TEST bdev_verify_big_io 00:07:29.804 ************************************ 00:07:29.804 10:39:29 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:29.804 [2024-12-16 10:39:29.635209] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:07:29.804 [2024-12-16 10:39:29.635345] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73887 ] 00:07:29.804 [2024-12-16 10:39:29.770715] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:30.064 [2024-12-16 10:39:29.847502] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:30.064 [2024-12-16 10:39:29.847581] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.636 Running I/O for 5 seconds... 
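Note: the IOPS and MiB/s columns in these bdevperf tables are redundant, which makes them a cheap sanity check: MiB/s = IOPS x io_size / 2^20. With the 4096-byte I/Os of the verify run that reduces to IOPS / 256 — e.g. the first progress sample, 21312.00 / 256 = 83.25 MiB/s, exactly as printed. The big-I/O pass now starting uses -o 65536, so there the conversion is IOPS / 16 (first sample: 764.00 / 16 = 47.75 MiB/s).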
00:07:35.329 764.00 IOPS, 47.75 MiB/s [2024-12-16T10:39:36.702Z] 1745.50 IOPS, 109.09 MiB/s [2024-12-16T10:39:36.963Z] 2448.00 IOPS, 153.00 MiB/s [2024-12-16T10:39:36.963Z] 2277.25 IOPS, 142.33 MiB/s 00:07:36.974 Latency(us) 00:07:36.974 [2024-12-16T10:39:36.963Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:36.974 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:36.974 Verification LBA range: start 0x0 length 0xbd0b 00:07:36.974 Nvme0n1 : 5.86 109.18 6.82 0.00 0.00 1105564.75 31457.28 1071160.71 00:07:36.974 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:36.974 Verification LBA range: start 0xbd0b length 0xbd0b 00:07:36.974 Nvme0n1 : 6.01 74.56 4.66 0.00 0.00 1646722.42 25306.98 1651910.50 00:07:36.974 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:36.974 Verification LBA range: start 0x0 length 0x4ff8 00:07:36.974 Nvme1n1p1 : 5.86 112.88 7.06 0.00 0.00 1053002.54 106470.79 987274.63 00:07:36.974 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:36.974 Verification LBA range: start 0x4ff8 length 0x4ff8 00:07:36.974 Nvme1n1p1 : 6.09 80.61 5.04 0.00 0.00 1466524.83 35893.56 1380893.93 00:07:36.974 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:36.974 Verification LBA range: start 0x0 length 0x4ff7 00:07:36.974 Nvme1n1p2 : 5.87 114.18 7.14 0.00 0.00 1016732.88 143574.25 1071160.71 00:07:36.974 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:36.974 Verification LBA range: start 0x4ff7 length 0x4ff7 00:07:36.974 Nvme1n1p2 : 6.09 78.95 4.93 0.00 0.00 1426570.00 34280.37 1729343.80 00:07:36.974 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:36.974 Verification LBA range: start 0x0 length 0x8000 00:07:36.975 Nvme2n1 : 5.91 116.61 7.29 0.00 0.00 980298.42 37506.76 1213121.77 00:07:36.975 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:36.975 Verification LBA range: start 0x8000 length 0x8000 00:07:36.975 Nvme2n1 : 6.09 80.41 5.03 0.00 0.00 1337062.80 36095.21 1426063.36 00:07:36.975 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:36.975 Verification LBA range: start 0x0 length 0x8000 00:07:36.975 Nvme2n2 : 5.96 124.06 7.75 0.00 0.00 904290.52 34683.67 1032444.06 00:07:36.975 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:36.975 Verification LBA range: start 0x8000 length 0x8000 00:07:36.975 Nvme2n2 : 6.15 93.73 5.86 0.00 0.00 1117552.29 24097.08 1432516.14 00:07:36.975 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:36.975 Verification LBA range: start 0x0 length 0x8000 00:07:36.975 Nvme2n3 : 5.96 124.54 7.78 0.00 0.00 877350.26 35288.62 1032444.06 00:07:36.975 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:36.975 Verification LBA range: start 0x8000 length 0x8000 00:07:36.975 Nvme2n3 : 6.25 110.39 6.90 0.00 0.00 915639.18 12098.95 2774693.42 00:07:36.975 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:36.975 Verification LBA range: start 0x0 length 0x2000 00:07:36.975 Nvme3n1 : 5.97 133.11 8.32 0.00 0.00 801735.82 3327.21 1032444.06 00:07:36.975 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:36.975 Verification LBA range: start 0x2000 length 0x2000 00:07:36.975 Nvme3n1 : 6.43 175.07 10.94 0.00 0.00 
556720.21 718.38 2826315.62 00:07:36.975 [2024-12-16T10:39:36.964Z] =================================================================================================================== 00:07:36.975 [2024-12-16T10:39:36.964Z] Total : 1528.27 95.52 0.00 0.00 1019603.36 718.38 2826315.62 00:07:38.889 00:07:38.889 real 0m8.953s 00:07:38.889 user 0m16.604s 00:07:38.889 sys 0m0.358s 00:07:38.889 10:39:38 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:38.889 10:39:38 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:07:38.889 ************************************ 00:07:38.889 END TEST bdev_verify_big_io 00:07:38.889 ************************************ 00:07:38.889 10:39:38 blockdev_nvme_gpt -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:38.889 10:39:38 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:07:38.889 10:39:38 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:38.889 10:39:38 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:38.889 ************************************ 00:07:38.889 START TEST bdev_write_zeroes 00:07:38.889 ************************************ 00:07:38.889 10:39:38 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:38.889 [2024-12-16 10:39:38.668498] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:07:38.889 [2024-12-16 10:39:38.668653] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73996 ] 00:07:38.889 [2024-12-16 10:39:38.808574] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.889 [2024-12-16 10:39:38.860342] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.462 Running I/O for 1 seconds... 
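Note: bdev_write_zeroes swaps the workload for -w write_zeroes at -t 1 and drops the core mask, so bdevperf falls back to a single core (the EAL trace shows -c 0x1 and only one reactor). A write_zeroes job issues zero-fill requests rather than payload writes, exercising each bdev's zeroing path; the IOPS / 256 = MiB/s relation still holds for these 4096-byte results (first sample below: 46976.00 / 256 = 183.50 MiB/s).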
00:07:40.404 46976.00 IOPS, 183.50 MiB/s 00:07:40.404 Latency(us) 00:07:40.404 [2024-12-16T10:39:40.393Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:40.404 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:40.404 Nvme0n1 : 1.03 6700.84 26.18 0.00 0.00 19050.28 8267.62 30045.74 00:07:40.404 Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:40.404 Nvme1n1p1 : 1.03 6692.62 26.14 0.00 0.00 19050.21 14922.04 29440.79 00:07:40.404 Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:40.404 Nvme1n1p2 : 1.03 6684.24 26.11 0.00 0.00 18947.15 10485.76 28432.54 00:07:40.404 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:40.404 Nvme2n1 : 1.04 6676.73 26.08 0.00 0.00 18919.30 9124.63 28029.24 00:07:40.404 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:40.404 Nvme2n2 : 1.04 6669.25 26.05 0.00 0.00 18904.19 8922.98 28835.84 00:07:40.404 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:40.404 Nvme2n3 : 1.04 6661.80 26.02 0.00 0.00 18880.91 8620.50 30045.74 00:07:40.404 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:40.404 Nvme3n1 : 1.04 6592.70 25.75 0.00 0.00 19051.13 11292.36 31053.98 00:07:40.404 [2024-12-16T10:39:40.393Z] =================================================================================================================== 00:07:40.404 [2024-12-16T10:39:40.393Z] Total : 46678.19 182.34 0.00 0.00 18971.78 8267.62 31053.98 00:07:40.665 00:07:40.666 real 0m1.971s 00:07:40.666 user 0m1.627s 00:07:40.666 sys 0m0.224s 00:07:40.666 10:39:40 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:40.666 ************************************ 00:07:40.666 END TEST bdev_write_zeroes 00:07:40.666 ************************************ 00:07:40.666 10:39:40 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:07:40.666 10:39:40 blockdev_nvme_gpt -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:40.666 10:39:40 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:07:40.666 10:39:40 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:40.666 10:39:40 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:40.666 ************************************ 00:07:40.666 START TEST bdev_json_nonenclosed 00:07:40.666 ************************************ 00:07:40.666 10:39:40 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:40.948 [2024-12-16 10:39:40.697999] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:07:40.948 [2024-12-16 10:39:40.698131] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74038 ] 00:07:40.948 [2024-12-16 10:39:40.833437] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.948 [2024-12-16 10:39:40.888090] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.948 [2024-12-16 10:39:40.888219] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:07:40.948 [2024-12-16 10:39:40.888236] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:40.948 [2024-12-16 10:39:40.888253] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:41.209 00:07:41.209 real 0m0.359s 00:07:41.209 user 0m0.151s 00:07:41.209 sys 0m0.103s 00:07:41.209 10:39:40 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:41.209 ************************************ 00:07:41.209 END TEST bdev_json_nonenclosed 00:07:41.209 ************************************ 00:07:41.209 10:39:40 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:07:41.209 10:39:41 blockdev_nvme_gpt -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:41.209 10:39:41 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:07:41.209 10:39:41 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:41.209 10:39:41 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:41.209 ************************************ 00:07:41.209 START TEST bdev_json_nonarray 00:07:41.209 ************************************ 00:07:41.209 10:39:41 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:41.209 [2024-12-16 10:39:41.122202] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:07:41.209 [2024-12-16 10:39:41.122316] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74058 ] 00:07:41.471 [2024-12-16 10:39:41.258987] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.471 [2024-12-16 10:39:41.293161] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.471 [2024-12-16 10:39:41.293266] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
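The bdev_json_nonarray companion fails one validation step later: the file is enclosed in braces, but its "subsystems" member is not an array, which produces the error recorded just above. A hypothetical reproduction, under the same caveat that nonarray.json's real contents are not shown in this log:

# hypothetical config: enclosed in { }, but "subsystems" is an object rather than an array
cat > /tmp/nonarray.json <<'EOF'
{ "subsystems": {} }
EOF
# expected failure: json_config_prepare_ctx: Invalid JSON configuration: 'subsystems' should be an array.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /tmp/nonarray.json \
    -q 128 -o 4096 -w write_zeroes -t 1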
00:07:41.471 [2024-12-16 10:39:41.293282] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:41.471 [2024-12-16 10:39:41.293294] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:41.471 00:07:41.471 real 0m0.314s 00:07:41.471 user 0m0.125s 00:07:41.471 sys 0m0.085s 00:07:41.471 10:39:41 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:41.471 ************************************ 00:07:41.471 END TEST bdev_json_nonarray 00:07:41.471 ************************************ 00:07:41.471 10:39:41 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:07:41.471 10:39:41 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # [[ gpt == bdev ]] 00:07:41.471 10:39:41 blockdev_nvme_gpt -- bdev/blockdev.sh@793 -- # [[ gpt == gpt ]] 00:07:41.471 10:39:41 blockdev_nvme_gpt -- bdev/blockdev.sh@794 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:07:41.471 10:39:41 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:41.471 10:39:41 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:41.471 10:39:41 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:41.471 ************************************ 00:07:41.471 START TEST bdev_gpt_uuid 00:07:41.471 ************************************ 00:07:41.471 10:39:41 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1125 -- # bdev_gpt_uuid 00:07:41.471 10:39:41 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@613 -- # local bdev 00:07:41.471 10:39:41 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@615 -- # start_spdk_tgt 00:07:41.471 10:39:41 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=74088 00:07:41.471 10:39:41 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:07:41.471 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:41.471 10:39:41 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 74088 00:07:41.471 10:39:41 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@831 -- # '[' -z 74088 ']' 00:07:41.471 10:39:41 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:41.471 10:39:41 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:41.471 10:39:41 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:07:41.471 10:39:41 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:41.471 10:39:41 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:41.471 10:39:41 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:41.732 [2024-12-16 10:39:41.501235] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
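The bdev_gpt_uuid test starting here drives spdk_tgt over its RPC socket: it loads the bdev config, then looks partitions up by their GPT unique-partition GUID and picks fields out of the bdev_get_bdevs JSON with jq. A minimal sketch of the same query by hand, assuming spdk_tgt is running on the default RPC socket (the GUID is the SPDK_TEST_first partition's, taken from the dump that follows):

# fetch one GPT partition bdev by its unique partition GUID and read the GUID back out
scripts/rpc.py bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 \
    | jq -r '.[0].driver_specific.gpt.unique_partition_guid'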
00:07:41.732 [2024-12-16 10:39:41.501356] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74088 ] 00:07:41.732 [2024-12-16 10:39:41.637503] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.732 [2024-12-16 10:39:41.674107] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.673 10:39:42 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:42.673 10:39:42 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@864 -- # return 0 00:07:42.673 10:39:42 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@617 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:42.673 10:39:42 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.673 10:39:42 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:42.935 Some configs were skipped because the RPC state that can call them passed over. 00:07:42.935 10:39:42 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:42.935 10:39:42 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@618 -- # rpc_cmd bdev_wait_for_examine 00:07:42.935 10:39:42 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.935 10:39:42 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:42.935 10:39:42 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:42.936 10:39:42 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:07:42.936 10:39:42 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.936 10:39:42 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:42.936 10:39:42 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:42.936 10:39:42 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # bdev='[ 00:07:42.936 { 00:07:42.936 "name": "Nvme1n1p1", 00:07:42.936 "aliases": [ 00:07:42.936 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:07:42.936 ], 00:07:42.936 "product_name": "GPT Disk", 00:07:42.936 "block_size": 4096, 00:07:42.936 "num_blocks": 655104, 00:07:42.936 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:07:42.936 "assigned_rate_limits": { 00:07:42.936 "rw_ios_per_sec": 0, 00:07:42.936 "rw_mbytes_per_sec": 0, 00:07:42.936 "r_mbytes_per_sec": 0, 00:07:42.936 "w_mbytes_per_sec": 0 00:07:42.936 }, 00:07:42.936 "claimed": false, 00:07:42.936 "zoned": false, 00:07:42.936 "supported_io_types": { 00:07:42.936 "read": true, 00:07:42.936 "write": true, 00:07:42.936 "unmap": true, 00:07:42.936 "flush": true, 00:07:42.936 "reset": true, 00:07:42.936 "nvme_admin": false, 00:07:42.936 "nvme_io": false, 00:07:42.936 "nvme_io_md": false, 00:07:42.936 "write_zeroes": true, 00:07:42.936 "zcopy": false, 00:07:42.936 "get_zone_info": false, 00:07:42.936 "zone_management": false, 00:07:42.936 "zone_append": false, 00:07:42.936 "compare": true, 00:07:42.936 "compare_and_write": false, 00:07:42.936 "abort": true, 00:07:42.936 "seek_hole": false, 00:07:42.936 "seek_data": false, 00:07:42.936 "copy": true, 00:07:42.936 "nvme_iov_md": false 00:07:42.936 }, 00:07:42.936 "driver_specific": { 
00:07:42.936 "gpt": { 00:07:42.936 "base_bdev": "Nvme1n1", 00:07:42.936 "offset_blocks": 256, 00:07:42.936 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:07:42.936 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:07:42.936 "partition_name": "SPDK_TEST_first" 00:07:42.936 } 00:07:42.936 } 00:07:42.936 } 00:07:42.936 ]' 00:07:42.936 10:39:42 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # jq -r length 00:07:42.936 10:39:42 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # [[ 1 == \1 ]] 00:07:42.936 10:39:42 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # jq -r '.[0].aliases[0]' 00:07:42.936 10:39:42 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:07:42.936 10:39:42 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:07:42.936 10:39:42 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:07:42.936 10:39:42 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:07:42.936 10:39:42 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.936 10:39:42 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:42.936 10:39:42 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:42.936 10:39:42 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # bdev='[ 00:07:42.936 { 00:07:42.936 "name": "Nvme1n1p2", 00:07:42.936 "aliases": [ 00:07:42.936 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:07:42.936 ], 00:07:42.936 "product_name": "GPT Disk", 00:07:42.936 "block_size": 4096, 00:07:42.936 "num_blocks": 655103, 00:07:42.936 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:07:42.936 "assigned_rate_limits": { 00:07:42.936 "rw_ios_per_sec": 0, 00:07:42.936 "rw_mbytes_per_sec": 0, 00:07:42.936 "r_mbytes_per_sec": 0, 00:07:42.936 "w_mbytes_per_sec": 0 00:07:42.936 }, 00:07:42.936 "claimed": false, 00:07:42.936 "zoned": false, 00:07:42.936 "supported_io_types": { 00:07:42.936 "read": true, 00:07:42.936 "write": true, 00:07:42.936 "unmap": true, 00:07:42.936 "flush": true, 00:07:42.936 "reset": true, 00:07:42.936 "nvme_admin": false, 00:07:42.936 "nvme_io": false, 00:07:42.936 "nvme_io_md": false, 00:07:42.936 "write_zeroes": true, 00:07:42.936 "zcopy": false, 00:07:42.936 "get_zone_info": false, 00:07:42.936 "zone_management": false, 00:07:42.936 "zone_append": false, 00:07:42.936 "compare": true, 00:07:42.936 "compare_and_write": false, 00:07:42.936 "abort": true, 00:07:42.936 "seek_hole": false, 00:07:42.936 "seek_data": false, 00:07:42.936 "copy": true, 00:07:42.936 "nvme_iov_md": false 00:07:42.936 }, 00:07:42.936 "driver_specific": { 00:07:42.936 "gpt": { 00:07:42.936 "base_bdev": "Nvme1n1", 00:07:42.936 "offset_blocks": 655360, 00:07:42.936 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:07:42.936 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:07:42.936 "partition_name": "SPDK_TEST_second" 00:07:42.936 } 00:07:42.936 } 00:07:42.936 } 00:07:42.936 ]' 00:07:42.936 10:39:42 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@626 -- # jq -r length 00:07:42.936 10:39:42 blockdev_nvme_gpt.bdev_gpt_uuid 
-- bdev/blockdev.sh@626 -- # [[ 1 == \1 ]] 00:07:42.936 10:39:42 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # jq -r '.[0].aliases[0]' 00:07:42.936 10:39:42 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:07:42.936 10:39:42 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:07:42.936 10:39:42 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:07:42.936 10:39:42 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@630 -- # killprocess 74088 00:07:42.936 10:39:42 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@950 -- # '[' -z 74088 ']' 00:07:42.936 10:39:42 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # kill -0 74088 00:07:42.936 10:39:42 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@955 -- # uname 00:07:42.936 10:39:42 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:42.936 10:39:42 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74088 00:07:43.197 killing process with pid 74088 00:07:43.197 10:39:42 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:43.197 10:39:42 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:43.198 10:39:42 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74088' 00:07:43.198 10:39:42 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@969 -- # kill 74088 00:07:43.198 10:39:42 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@974 -- # wait 74088 00:07:43.459 ************************************ 00:07:43.459 END TEST bdev_gpt_uuid 00:07:43.459 ************************************ 00:07:43.459 00:07:43.459 real 0m1.782s 00:07:43.459 user 0m1.940s 00:07:43.459 sys 0m0.358s 00:07:43.459 10:39:43 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:43.459 10:39:43 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:43.459 10:39:43 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # [[ gpt == crypto_sw ]] 00:07:43.459 10:39:43 blockdev_nvme_gpt -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:07:43.459 10:39:43 blockdev_nvme_gpt -- bdev/blockdev.sh@810 -- # cleanup 00:07:43.459 10:39:43 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:07:43.459 10:39:43 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:43.459 10:39:43 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:07:43.459 10:39:43 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:07:43.459 10:39:43 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:07:43.459 10:39:43 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:43.720 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:43.982 Waiting for block devices as requested 00:07:43.982 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:07:43.982 0000:00:10.0 (1b36 0010): 
uio_pci_generic -> nvme 00:07:43.982 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:07:44.243 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:07:49.534 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:07:49.534 10:39:49 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:07:49.534 10:39:49 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:07:49.534 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:07:49.534 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:07:49.534 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:07:49.534 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:07:49.534 10:39:49 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:07:49.534 00:07:49.534 real 0m50.040s 00:07:49.534 user 1m3.631s 00:07:49.534 sys 0m7.869s 00:07:49.534 10:39:49 blockdev_nvme_gpt -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:49.534 10:39:49 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:49.534 ************************************ 00:07:49.534 END TEST blockdev_nvme_gpt 00:07:49.534 ************************************ 00:07:49.534 10:39:49 -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:07:49.534 10:39:49 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:49.534 10:39:49 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:49.534 10:39:49 -- common/autotest_common.sh@10 -- # set +x 00:07:49.534 ************************************ 00:07:49.534 START TEST nvme 00:07:49.534 ************************************ 00:07:49.534 10:39:49 nvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:07:49.534 * Looking for test storage... 00:07:49.534 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:07:49.534 10:39:49 nvme -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:49.534 10:39:49 nvme -- common/autotest_common.sh@1681 -- # lcov --version 00:07:49.534 10:39:49 nvme -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:49.796 10:39:49 nvme -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:49.796 10:39:49 nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:49.796 10:39:49 nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:49.796 10:39:49 nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:49.796 10:39:49 nvme -- scripts/common.sh@336 -- # IFS=.-: 00:07:49.796 10:39:49 nvme -- scripts/common.sh@336 -- # read -ra ver1 00:07:49.796 10:39:49 nvme -- scripts/common.sh@337 -- # IFS=.-: 00:07:49.796 10:39:49 nvme -- scripts/common.sh@337 -- # read -ra ver2 00:07:49.796 10:39:49 nvme -- scripts/common.sh@338 -- # local 'op=<' 00:07:49.796 10:39:49 nvme -- scripts/common.sh@340 -- # ver1_l=2 00:07:49.796 10:39:49 nvme -- scripts/common.sh@341 -- # ver2_l=1 00:07:49.796 10:39:49 nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:49.796 10:39:49 nvme -- scripts/common.sh@344 -- # case "$op" in 00:07:49.796 10:39:49 nvme -- scripts/common.sh@345 -- # : 1 00:07:49.796 10:39:49 nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:49.796 10:39:49 nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:49.796 10:39:49 nvme -- scripts/common.sh@365 -- # decimal 1 00:07:49.796 10:39:49 nvme -- scripts/common.sh@353 -- # local d=1 00:07:49.796 10:39:49 nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:49.796 10:39:49 nvme -- scripts/common.sh@355 -- # echo 1 00:07:49.796 10:39:49 nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:07:49.796 10:39:49 nvme -- scripts/common.sh@366 -- # decimal 2 00:07:49.796 10:39:49 nvme -- scripts/common.sh@353 -- # local d=2 00:07:49.796 10:39:49 nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:49.796 10:39:49 nvme -- scripts/common.sh@355 -- # echo 2 00:07:49.796 10:39:49 nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:07:49.796 10:39:49 nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:49.796 10:39:49 nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:49.796 10:39:49 nvme -- scripts/common.sh@368 -- # return 0 00:07:49.796 10:39:49 nvme -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:49.796 10:39:49 nvme -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:49.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:49.796 --rc genhtml_branch_coverage=1 00:07:49.796 --rc genhtml_function_coverage=1 00:07:49.796 --rc genhtml_legend=1 00:07:49.796 --rc geninfo_all_blocks=1 00:07:49.796 --rc geninfo_unexecuted_blocks=1 00:07:49.796 00:07:49.796 ' 00:07:49.796 10:39:49 nvme -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:49.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:49.796 --rc genhtml_branch_coverage=1 00:07:49.796 --rc genhtml_function_coverage=1 00:07:49.796 --rc genhtml_legend=1 00:07:49.796 --rc geninfo_all_blocks=1 00:07:49.796 --rc geninfo_unexecuted_blocks=1 00:07:49.796 00:07:49.796 ' 00:07:49.796 10:39:49 nvme -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:49.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:49.796 --rc genhtml_branch_coverage=1 00:07:49.796 --rc genhtml_function_coverage=1 00:07:49.796 --rc genhtml_legend=1 00:07:49.796 --rc geninfo_all_blocks=1 00:07:49.796 --rc geninfo_unexecuted_blocks=1 00:07:49.796 00:07:49.796 ' 00:07:49.796 10:39:49 nvme -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:49.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:49.796 --rc genhtml_branch_coverage=1 00:07:49.796 --rc genhtml_function_coverage=1 00:07:49.796 --rc genhtml_legend=1 00:07:49.796 --rc geninfo_all_blocks=1 00:07:49.796 --rc geninfo_unexecuted_blocks=1 00:07:49.796 00:07:49.796 ' 00:07:49.796 10:39:49 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:50.058 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:50.631 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:07:50.631 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:07:50.631 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:07:50.891 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:07:50.891 10:39:50 nvme -- nvme/nvme.sh@79 -- # uname 00:07:50.891 10:39:50 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:07:50.891 10:39:50 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:07:50.891 10:39:50 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:07:50.891 10:39:50 nvme -- common/autotest_common.sh@1082 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:07:50.891 10:39:50 nvme -- 
common/autotest_common.sh@1068 -- # _randomize_va_space=2 00:07:50.891 10:39:50 nvme -- common/autotest_common.sh@1069 -- # echo 0 00:07:50.891 10:39:50 nvme -- common/autotest_common.sh@1071 -- # stubpid=74707 00:07:50.891 10:39:50 nvme -- common/autotest_common.sh@1072 -- # echo Waiting for stub to ready for secondary processes... 00:07:50.891 Waiting for stub to ready for secondary processes... 00:07:50.891 10:39:50 nvme -- common/autotest_common.sh@1070 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:07:50.891 10:39:50 nvme -- common/autotest_common.sh@1073 -- # '[' -e /var/run/spdk_stub0 ']' 00:07:50.891 10:39:50 nvme -- common/autotest_common.sh@1075 -- # [[ -e /proc/74707 ]] 00:07:50.891 10:39:50 nvme -- common/autotest_common.sh@1076 -- # sleep 1s 00:07:50.891 [2024-12-16 10:39:50.740147] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:07:50.891 [2024-12-16 10:39:50.740269] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:07:51.835 [2024-12-16 10:39:51.469028] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:51.835 [2024-12-16 10:39:51.489827] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:07:51.835 [2024-12-16 10:39:51.490149] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:07:51.835 [2024-12-16 10:39:51.490264] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:51.835 [2024-12-16 10:39:51.500628] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:07:51.835 [2024-12-16 10:39:51.500667] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:51.835 [2024-12-16 10:39:51.515455] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:07:51.835 [2024-12-16 10:39:51.515639] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:07:51.835 [2024-12-16 10:39:51.516678] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:51.835 [2024-12-16 10:39:51.517066] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:07:51.835 [2024-12-16 10:39:51.517171] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:07:51.835 [2024-12-16 10:39:51.518222] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:51.835 [2024-12-16 10:39:51.518714] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:07:51.835 [2024-12-16 10:39:51.518858] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:07:51.835 [2024-12-16 10:39:51.520909] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:51.835 [2024-12-16 10:39:51.521078] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:07:51.835 [2024-12-16 10:39:51.521201] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:07:51.835 [2024-12-16 10:39:51.521309] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:07:51.835 [2024-12-16 10:39:51.521428] nvme_cuse.c: 
928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:07:51.835 done. 00:07:51.835 10:39:51 nvme -- common/autotest_common.sh@1073 -- # '[' -e /var/run/spdk_stub0 ']' 00:07:51.835 10:39:51 nvme -- common/autotest_common.sh@1078 -- # echo done. 00:07:51.835 10:39:51 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:07:51.835 10:39:51 nvme -- common/autotest_common.sh@1101 -- # '[' 10 -le 1 ']' 00:07:51.835 10:39:51 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:51.835 10:39:51 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:51.835 ************************************ 00:07:51.835 START TEST nvme_reset 00:07:51.835 ************************************ 00:07:51.835 10:39:51 nvme.nvme_reset -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:07:52.097 Initializing NVMe Controllers 00:07:52.097 Skipping QEMU NVMe SSD at 0000:00:13.0 00:07:52.097 Skipping QEMU NVMe SSD at 0000:00:10.0 00:07:52.097 Skipping QEMU NVMe SSD at 0000:00:11.0 00:07:52.097 Skipping QEMU NVMe SSD at 0000:00:12.0 00:07:52.097 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:07:52.097 00:07:52.097 real 0m0.177s 00:07:52.097 user 0m0.061s 00:07:52.097 sys 0m0.070s 00:07:52.097 10:39:51 nvme.nvme_reset -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:52.097 ************************************ 00:07:52.097 END TEST nvme_reset 00:07:52.097 ************************************ 00:07:52.097 10:39:51 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:07:52.097 10:39:51 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:07:52.097 10:39:51 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:52.097 10:39:51 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:52.097 10:39:51 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:52.097 ************************************ 00:07:52.097 START TEST nvme_identify 00:07:52.097 ************************************ 00:07:52.097 10:39:51 nvme.nvme_identify -- common/autotest_common.sh@1125 -- # nvme_identify 00:07:52.097 10:39:51 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:07:52.097 10:39:51 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:07:52.097 10:39:51 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:07:52.097 10:39:51 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:07:52.097 10:39:51 nvme.nvme_identify -- common/autotest_common.sh@1496 -- # bdfs=() 00:07:52.097 10:39:51 nvme.nvme_identify -- common/autotest_common.sh@1496 -- # local bdfs 00:07:52.097 10:39:51 nvme.nvme_identify -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:07:52.097 10:39:51 nvme.nvme_identify -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:52.097 10:39:51 nvme.nvme_identify -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:07:52.097 10:39:52 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:07:52.097 10:39:52 nvme.nvme_identify -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:07:52.097 10:39:52 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:07:52.359 
===================================================== 00:07:52.359 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:52.359 ===================================================== 00:07:52.359 Controller Capabilities/Features 00:07:52.359 ================================ 00:07:52.359 Vendor ID: 1b36 00:07:52.359 Subsystem Vendor ID: 1af4 00:07:52.359 Serial Number: 12343 00:07:52.359 Model Number: QEMU NVMe Ctrl 00:07:52.359 Firmware Version: 8.0.0 00:07:52.359 Recommended Arb Burst: 6 00:07:52.359 IEEE OUI Identifier: 00 54 52 00:07:52.359 Multi-path I/O 00:07:52.359 May have multiple subsystem ports: No 00:07:52.359 May have multiple controllers: Yes 00:07:52.359 Associated with SR-IOV VF: No 00:07:52.359 Max Data Transfer Size: 524288 00:07:52.359 Max Number of Namespaces: 256 00:07:52.359 Max Number of I/O Queues: 64 00:07:52.359 NVMe Specification Version (VS): 1.4 00:07:52.359 NVMe Specification Version (Identify): 1.4 00:07:52.359 Maximum Queue Entries: 2048 00:07:52.359 Contiguous Queues Required: Yes 00:07:52.359 Arbitration Mechanisms Supported 00:07:52.359 Weighted Round Robin: Not Supported 00:07:52.359 Vendor Specific: Not Supported 00:07:52.359 Reset Timeout: 7500 ms 00:07:52.359 Doorbell Stride: 4 bytes 00:07:52.359 NVM Subsystem Reset: Not Supported 00:07:52.359 Command Sets Supported 00:07:52.359 NVM Command Set: Supported 00:07:52.359 Boot Partition: Not Supported 00:07:52.359 Memory Page Size Minimum: 4096 bytes 00:07:52.359 Memory Page Size Maximum: 65536 bytes 00:07:52.359 Persistent Memory Region: Not Supported 00:07:52.359 Optional Asynchronous Events Supported 00:07:52.359 Namespace Attribute Notices: Supported 00:07:52.359 Firmware Activation Notices: Not Supported 00:07:52.359 ANA Change Notices: Not Supported 00:07:52.359 PLE Aggregate Log Change Notices: Not Supported 00:07:52.359 LBA Status Info Alert Notices: Not Supported 00:07:52.359 EGE Aggregate Log Change Notices: Not Supported 00:07:52.359 Normal NVM Subsystem Shutdown event: Not Supported 00:07:52.359 Zone Descriptor Change Notices: Not Supported 00:07:52.359 Discovery Log Change Notices: Not Supported 00:07:52.359 Controller Attributes 00:07:52.359 128-bit Host Identifier: Not Supported 00:07:52.359 Non-Operational Permissive Mode: Not Supported 00:07:52.359 NVM Sets: Not Supported 00:07:52.359 Read Recovery Levels: Not Supported 00:07:52.359 Endurance Groups: Supported 00:07:52.359 Predictable Latency Mode: Not Supported 00:07:52.359 Traffic Based Keep ALive: Not Supported 00:07:52.359 Namespace Granularity: Not Supported 00:07:52.359 SQ Associations: Not Supported 00:07:52.359 UUID List: Not Supported 00:07:52.359 Multi-Domain Subsystem: Not Supported 00:07:52.359 Fixed Capacity Management: Not Supported 00:07:52.359 Variable Capacity Management: Not Supported 00:07:52.359 Delete Endurance Group: Not Supported 00:07:52.359 Delete NVM Set: Not Supported 00:07:52.359 Extended LBA Formats Supported: Supported 00:07:52.359 Flexible Data Placement Supported: Supported 00:07:52.359 00:07:52.359 Controller Memory Buffer Support 00:07:52.359 ================================ 00:07:52.359 Supported: No 00:07:52.359 00:07:52.359 Persistent Memory Region Support 00:07:52.359 ================================ 00:07:52.359 Supported: No 00:07:52.359 00:07:52.359 Admin Command Set Attributes 00:07:52.359 ============================ 00:07:52.359 Security Send/Receive: Not Supported 00:07:52.359 Format NVM: Supported 00:07:52.359 Firmware Activate/Download: Not Supported 00:07:52.359 Namespace Management: Supported 
00:07:52.359 Device Self-Test: Not Supported 00:07:52.359 Directives: Supported 00:07:52.359 NVMe-MI: Not Supported 00:07:52.359 Virtualization Management: Not Supported 00:07:52.359 Doorbell Buffer Config: Supported 00:07:52.359 Get LBA Status Capability: Not Supported 00:07:52.359 Command & Feature Lockdown Capability: Not Supported 00:07:52.359 Abort Command Limit: 4 00:07:52.359 Async Event Request Limit: 4 00:07:52.359 Number of Firmware Slots: N/A 00:07:52.359 Firmware Slot 1 Read-Only: N/A 00:07:52.359 Firmware Activation Without Reset: N/A 00:07:52.359 Multiple Update Detection Support: N/A 00:07:52.359 Firmware Update Granularity: No Information Provided 00:07:52.359 Per-Namespace SMART Log: Yes 00:07:52.359 Asymmetric Namespace Access Log Page: Not Supported 00:07:52.359 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:07:52.359 Command Effects Log Page: Supported 00:07:52.359 Get Log Page Extended Data: Supported 00:07:52.359 Telemetry Log Pages: Not Supported 00:07:52.359 Persistent Event Log Pages: Not Supported 00:07:52.359 Supported Log Pages Log Page: May Support 00:07:52.359 Commands Supported & Effects Log Page: Not Supported 00:07:52.359 Feature Identifiers & Effects Log Page:May Support 00:07:52.359 NVMe-MI Commands & Effects Log Page: May Support 00:07:52.359 Data Area 4 for Telemetry Log: Not Supported 00:07:52.359 Error Log Page Entries Supported: 1 00:07:52.359 Keep Alive: Not Supported 00:07:52.359 00:07:52.359 NVM Command Set Attributes 00:07:52.359 ========================== 00:07:52.359 Submission Queue Entry Size 00:07:52.359 Max: 64 00:07:52.359 Min: 64 00:07:52.359 Completion Queue Entry Size 00:07:52.359 Max: 16 00:07:52.359 Min: 16 00:07:52.359 Number of Namespaces: 256 00:07:52.360 Compare Command: Supported 00:07:52.360 Write Uncorrectable Command: Not Supported 00:07:52.360 Dataset Management Command: Supported 00:07:52.360 Write Zeroes Command: Supported 00:07:52.360 Set Features Save Field: Supported 00:07:52.360 Reservations: Not Supported 00:07:52.360 Timestamp: Supported 00:07:52.360 Copy: Supported 00:07:52.360 Volatile Write Cache: Present 00:07:52.360 Atomic Write Unit (Normal): 1 00:07:52.360 Atomic Write Unit (PFail): 1 00:07:52.360 Atomic Compare & Write Unit: 1 00:07:52.360 Fused Compare & Write: Not Supported 00:07:52.360 Scatter-Gather List 00:07:52.360 SGL Command Set: Supported 00:07:52.360 SGL Keyed: Not Supported 00:07:52.360 SGL Bit Bucket Descriptor: Not Supported 00:07:52.360 SGL Metadata Pointer: Not Supported 00:07:52.360 Oversized SGL: Not Supported 00:07:52.360 SGL Metadata Address: Not Supported 00:07:52.360 SGL Offset: Not Supported 00:07:52.360 Transport SGL Data Block: Not Supported 00:07:52.360 Replay Protected Memory Block: Not Supported 00:07:52.360 00:07:52.360 Firmware Slot Information 00:07:52.360 ========================= 00:07:52.360 Active slot: 1 00:07:52.360 Slot 1 Firmware Revision: 1.0 00:07:52.360 00:07:52.360 00:07:52.360 Commands Supported and Effects 00:07:52.360 ============================== 00:07:52.360 Admin Commands 00:07:52.360 -------------- 00:07:52.360 Delete I/O Submission Queue (00h): Supported 00:07:52.360 Create I/O Submission Queue (01h): Supported 00:07:52.360 Get Log Page (02h): Supported 00:07:52.360 Delete I/O Completion Queue (04h): Supported 00:07:52.360 Create I/O Completion Queue (05h): Supported 00:07:52.360 Identify (06h): Supported 00:07:52.360 Abort (08h): Supported 00:07:52.360 Set Features (09h): Supported 00:07:52.360 Get Features (0Ah): Supported 00:07:52.360 Asynchronous Event 
Request (0Ch): Supported 00:07:52.360 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:52.360 Directive Send (19h): Supported 00:07:52.360 Directive Receive (1Ah): Supported 00:07:52.360 Virtualization Management (1Ch): Supported 00:07:52.360 Doorbell Buffer Config (7Ch): Supported 00:07:52.360 Format NVM (80h): Supported LBA-Change 00:07:52.360 I/O Commands 00:07:52.360 ------------ 00:07:52.360 Flush (00h): Supported LBA-Change 00:07:52.360 Write (01h): Supported LBA-Change 00:07:52.360 Read (02h): Supported 00:07:52.360 Compare (05h): Supported 00:07:52.360 Write Zeroes (08h): Supported LBA-Change 00:07:52.360 Dataset Management (09h): Supported LBA-Change 00:07:52.360 Unknown (0Ch): Supported 00:07:52.360 Unknown (12h): Supported 00:07:52.360 Copy (19h): Supported LBA-Change 00:07:52.360 Unknown (1Dh): Supported LBA-Change 00:07:52.360 00:07:52.360 Error Log 00:07:52.360 ========= 00:07:52.360 00:07:52.360 Arbitration 00:07:52.360 =========== 00:07:52.360 Arbitration Burst: no limit 00:07:52.360 00:07:52.360 Power Management 00:07:52.360 ================ 00:07:52.360 Number of Power States: 1 00:07:52.360 Current Power State: Power State #0 00:07:52.360 Power State #0: 00:07:52.360 Max Power: 25.00 W 00:07:52.360 Non-Operational State: Operational 00:07:52.360 Entry Latency: 16 microseconds 00:07:52.360 Exit Latency: 4 microseconds 00:07:52.360 Relative Read Throughput: 0 00:07:52.360 Relative Read Latency: 0 00:07:52.360 Relative Write Throughput: 0 00:07:52.360 Relative Write Latency: 0 00:07:52.360 Idle Power: Not Reported 00:07:52.360 Active Power: Not Reported 00:07:52.360 Non-Operational Permissive Mode: Not Supported 00:07:52.360 00:07:52.360 Health Information 00:07:52.360 ================== 00:07:52.360 Critical Warnings: 00:07:52.360 Available Spare Space: OK 00:07:52.360 Temperature: OK 00:07:52.360 Device Reliability: OK 00:07:52.360 Read Only: No 00:07:52.360 Volatile Memory Backup: OK 00:07:52.360 Current Temperature: 323 Kelvin (50 Celsius) 00:07:52.360 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:52.360 Available Spare: 0% 00:07:52.360 Available Spare Threshold: 0% 00:07:52.360 Life Percentage Used: 0% 00:07:52.360 Data Units Read: 832 00:07:52.360 Data Units Written: 761 00:07:52.360 Host Read Commands: 37237 00:07:52.360 Host Write Commands: 36660 00:07:52.360 Controller Busy Time: 0 minutes 00:07:52.360 Power Cycles: 0 00:07:52.360 Power On Hours: 0 hours 00:07:52.360 Unsafe Shutdowns: 0 00:07:52.360 Unrecoverable Media Errors: 0 00:07:52.360 Lifetime Error Log Entries: 0 00:07:52.360 Warning Temperature Time: 0 minutes 00:07:52.360 Critical Temperature Time: 0 minutes 00:07:52.360 00:07:52.360 Number of Queues 00:07:52.360 ================ 00:07:52.360 Number of I/O Submission Queues: 64 00:07:52.360 Number of I/O Completion Queues: 64 00:07:52.360 00:07:52.360 ZNS Specific Controller Data 00:07:52.360 ============================ 00:07:52.360 Zone Append Size Limit: 0 00:07:52.360 00:07:52.360 00:07:52.360 Active Namespaces 00:07:52.360 ================= 00:07:52.360 Namespace ID:1 00:07:52.360 Error Recovery Timeout: Unlimited 00:07:52.360 Command Set Identifier: NVM (00h) 00:07:52.360 Deallocate: Supported 00:07:52.360 Deallocated/Unwritten Error: Supported 00:07:52.360 Deallocated Read Value: All 0x00 00:07:52.360 Deallocate in Write Zeroes: Not Supported 00:07:52.360 Deallocated Guard Field: 0xFFFF 00:07:52.360 Flush: Supported 00:07:52.360 Reservation: Not Supported 00:07:52.360 Namespace Sharing Capabilities: Multiple Controllers 
00:07:52.360 Size (in LBAs): 262144 (1GiB) 00:07:52.360 Capacity (in LBAs): 262144 (1GiB) 00:07:52.360 Utilization (in LBAs): 262144 (1GiB) 00:07:52.360 Thin Provisioning: Not Supported 00:07:52.360 Per-NS Atomic Units: No 00:07:52.360 Maximum Single Source Range Length: 128 00:07:52.360 Maximum Copy Length: 128 00:07:52.360 Maximum Source Range Count: 128 00:07:52.360 NGUID/EUI64 Never Reused: No 00:07:52.360 Namespace Write Protected: No 00:07:52.360 Endurance group ID: 1 00:07:52.360 Number of LBA Formats: 8 00:07:52.360 Current LBA Format: LBA Format #04 00:07:52.360 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:52.360 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:52.360 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:52.360 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:52.360 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:52.360 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:52.360 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:52.360 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:52.360 00:07:52.360 Get Feature FDP: 00:07:52.360 ================ 00:07:52.360 Enabled: Yes 00:07:52.360 FDP configuration index: 0 00:07:52.360 00:07:52.360 FDP configurations log page 00:07:52.360 =========================== 00:07:52.360 Number of FDP configurations: 1 00:07:52.360 Version: 0 00:07:52.360 Size: 112 00:07:52.360 FDP Configuration Descriptor: 0 00:07:52.360 Descriptor Size: 96 00:07:52.360 Reclaim Group Identifier format: 2 00:07:52.360 FDP Volatile Write Cache: Not Present 00:07:52.360 FDP Configuration: Valid 00:07:52.360 Vendor Specific Size: 0 00:07:52.360 Number of Reclaim Groups: 2 00:07:52.360 Number of Reclaim Unit Handles: 8 00:07:52.360 Max Placement Identifiers: 128 00:07:52.360 Number of Namespaces Supported: 256 00:07:52.360 Reclaim unit Nominal Size: 6000000 bytes 00:07:52.360 Estimated Reclaim Unit Time Limit: Not Reported 00:07:52.360 RUH Desc #000: RUH Type: Initially Isolated 00:07:52.360 RUH Desc #001: RUH Type: Initially Isolated 00:07:52.360 RUH Desc #002: RUH Type: Initially Isolated 00:07:52.360 RUH Desc #003: RUH Type: Initially Isolated 00:07:52.360 RUH Desc #004: RUH Type: Initially Isolated 00:07:52.360 RUH Desc #005: RUH Type: Initially Isolated 00:07:52.360 RUH Desc #006: RUH Type: Initially Isolated 00:07:52.360 RUH Desc #007: RUH Type: Initially Isolated 00:07:52.360 00:07:52.360 FDP reclaim unit handle usage log page 00:07:52.360 ====================================== 00:07:52.360 [2024-12-16 10:39:52.190449] nvme_ctrlr.c:3628:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0] process 74728 terminated unexpected [2024-12-16 10:39:52.192856] nvme_ctrlr.c:3628:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0] process 74728 terminated unexpected 00:07:52.360 Number of Reclaim Unit Handles: 8 00:07:52.360 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:07:52.360 RUH Usage Desc #001: RUH Attributes: Unused 00:07:52.360 RUH Usage Desc #002: RUH Attributes: Unused 00:07:52.360 RUH Usage Desc #003: RUH Attributes: Unused 00:07:52.360 RUH Usage Desc #004: RUH Attributes: Unused 00:07:52.360 RUH Usage Desc #005: RUH Attributes: Unused 00:07:52.360 RUH Usage Desc #006: RUH Attributes: Unused 00:07:52.360 RUH Usage Desc #007: RUH Attributes: Unused 00:07:52.360 00:07:52.360 FDP statistics log page 00:07:52.360 ======================= 00:07:52.360 Host bytes with metadata written: 474062848 00:07:52.360 Media bytes with metadata written: 474116096 00:07:52.360 Media
bytes erased: 0 00:07:52.360 00:07:52.360 FDP events log page 00:07:52.360 =================== 00:07:52.360 Number of FDP events: 0 00:07:52.360 00:07:52.360 NVM Specific Namespace Data 00:07:52.360 =========================== 00:07:52.360 Logical Block Storage Tag Mask: 0 00:07:52.361 Protection Information Capabilities: 00:07:52.361 16b Guard Protection Information Storage Tag Support: No 00:07:52.361 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:52.361 Storage Tag Check Read Support: No 00:07:52.361 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.361 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.361 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.361 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.361 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.361 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.361 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.361 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.361 ===================================================== 00:07:52.361 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:52.361 ===================================================== 00:07:52.361 Controller Capabilities/Features 00:07:52.361 ================================ 00:07:52.361 Vendor ID: 1b36 00:07:52.361 Subsystem Vendor ID: 1af4 00:07:52.361 Serial Number: 12340 00:07:52.361 Model Number: QEMU NVMe Ctrl 00:07:52.361 Firmware Version: 8.0.0 00:07:52.361 Recommended Arb Burst: 6 00:07:52.361 IEEE OUI Identifier: 00 54 52 00:07:52.361 Multi-path I/O 00:07:52.361 May have multiple subsystem ports: No 00:07:52.361 May have multiple controllers: No 00:07:52.361 Associated with SR-IOV VF: No 00:07:52.361 Max Data Transfer Size: 524288 00:07:52.361 Max Number of Namespaces: 256 00:07:52.361 Max Number of I/O Queues: 64 00:07:52.361 NVMe Specification Version (VS): 1.4 00:07:52.361 NVMe Specification Version (Identify): 1.4 00:07:52.361 Maximum Queue Entries: 2048 00:07:52.361 Contiguous Queues Required: Yes 00:07:52.361 Arbitration Mechanisms Supported 00:07:52.361 Weighted Round Robin: Not Supported 00:07:52.361 Vendor Specific: Not Supported 00:07:52.361 Reset Timeout: 7500 ms 00:07:52.361 Doorbell Stride: 4 bytes 00:07:52.361 NVM Subsystem Reset: Not Supported 00:07:52.361 Command Sets Supported 00:07:52.361 NVM Command Set: Supported 00:07:52.361 Boot Partition: Not Supported 00:07:52.361 Memory Page Size Minimum: 4096 bytes 00:07:52.361 Memory Page Size Maximum: 65536 bytes 00:07:52.361 Persistent Memory Region: Not Supported 00:07:52.361 Optional Asynchronous Events Supported 00:07:52.361 Namespace Attribute Notices: Supported 00:07:52.361 Firmware Activation Notices: Not Supported 00:07:52.361 ANA Change Notices: Not Supported 00:07:52.361 PLE Aggregate Log Change Notices: Not Supported 00:07:52.361 LBA Status Info Alert Notices: Not Supported 00:07:52.361 EGE Aggregate Log Change Notices: Not Supported 00:07:52.361 Normal NVM Subsystem Shutdown event: Not Supported 00:07:52.361 Zone Descriptor Change Notices: Not Supported 00:07:52.361 Discovery Log Change Notices: Not Supported 00:07:52.361 Controller Attributes 00:07:52.361 
128-bit Host Identifier: Not Supported 00:07:52.361 Non-Operational Permissive Mode: Not Supported 00:07:52.361 NVM Sets: Not Supported 00:07:52.361 Read Recovery Levels: Not Supported 00:07:52.361 Endurance Groups: Not Supported 00:07:52.361 Predictable Latency Mode: Not Supported 00:07:52.361 Traffic Based Keep ALive: Not Supported 00:07:52.361 Namespace Granularity: Not Supported 00:07:52.361 SQ Associations: Not Supported 00:07:52.361 UUID List: Not Supported 00:07:52.361 Multi-Domain Subsystem: Not Supported 00:07:52.361 Fixed Capacity Management: Not Supported 00:07:52.361 Variable Capacity Management: Not Supported 00:07:52.361 Delete Endurance Group: Not Supported 00:07:52.361 Delete NVM Set: Not Supported 00:07:52.361 Extended LBA Formats Supported: Supported 00:07:52.361 Flexible Data Placement Supported: Not Supported 00:07:52.361 00:07:52.361 Controller Memory Buffer Support 00:07:52.361 ================================ 00:07:52.361 Supported: No 00:07:52.361 00:07:52.361 Persistent Memory Region Support 00:07:52.361 ================================ 00:07:52.361 Supported: No 00:07:52.361 00:07:52.361 Admin Command Set Attributes 00:07:52.361 ============================ 00:07:52.361 Security Send/Receive: Not Supported 00:07:52.361 Format NVM: Supported 00:07:52.361 Firmware Activate/Download: Not Supported 00:07:52.361 Namespace Management: Supported 00:07:52.361 Device Self-Test: Not Supported 00:07:52.361 Directives: Supported 00:07:52.361 NVMe-MI: Not Supported 00:07:52.361 Virtualization Management: Not Supported 00:07:52.361 Doorbell Buffer Config: Supported 00:07:52.361 Get LBA Status Capability: Not Supported 00:07:52.361 Command & Feature Lockdown Capability: Not Supported 00:07:52.361 Abort Command Limit: 4 00:07:52.361 Async Event Request Limit: 4 00:07:52.361 Number of Firmware Slots: N/A 00:07:52.361 Firmware Slot 1 Read-Only: N/A 00:07:52.361 Firmware Activation Without Reset: N/A 00:07:52.361 Multiple Update Detection Support: N/A 00:07:52.361 Firmware Update Granularity: No Information Provided 00:07:52.361 Per-Namespace SMART Log: Yes 00:07:52.361 Asymmetric Namespace Access Log Page: Not Supported 00:07:52.361 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:07:52.361 Command Effects Log Page: Supported 00:07:52.361 Get Log Page Extended Data: Supported 00:07:52.361 Telemetry Log Pages: Not Supported 00:07:52.361 Persistent Event Log Pages: Not Supported 00:07:52.361 Supported Log Pages Log Page: May Support 00:07:52.361 Commands Supported & Effects Log Page: Not Supported 00:07:52.361 Feature Identifiers & Effects Log Page:May Support 00:07:52.361 NVMe-MI Commands & Effects Log Page: May Support 00:07:52.361 Data Area 4 for Telemetry Log: Not Supported 00:07:52.361 Error Log Page Entries Supported: 1 00:07:52.361 Keep Alive: Not Supported 00:07:52.361 00:07:52.361 NVM Command Set Attributes 00:07:52.361 ========================== 00:07:52.361 Submission Queue Entry Size 00:07:52.361 Max: 64 00:07:52.361 Min: 64 00:07:52.361 Completion Queue Entry Size 00:07:52.361 Max: 16 00:07:52.361 Min: 16 00:07:52.361 Number of Namespaces: 256 00:07:52.361 Compare Command: Supported 00:07:52.361 Write Uncorrectable Command: Not Supported 00:07:52.361 Dataset Management Command: Supported 00:07:52.361 Write Zeroes Command: Supported 00:07:52.361 Set Features Save Field: Supported 00:07:52.361 Reservations: Not Supported 00:07:52.361 Timestamp: Supported 00:07:52.361 Copy: Supported 00:07:52.361 Volatile Write Cache: Present 00:07:52.361 Atomic Write Unit (Normal): 1 
00:07:52.361 Atomic Write Unit (PFail): 1 00:07:52.361 Atomic Compare & Write Unit: 1 00:07:52.361 Fused Compare & Write: Not Supported 00:07:52.361 Scatter-Gather List 00:07:52.361 SGL Command Set: Supported 00:07:52.361 SGL Keyed: Not Supported 00:07:52.361 SGL Bit Bucket Descriptor: Not Supported 00:07:52.361 SGL Metadata Pointer: Not Supported 00:07:52.361 Oversized SGL: Not Supported 00:07:52.361 SGL Metadata Address: Not Supported 00:07:52.361 SGL Offset: Not Supported 00:07:52.361 Transport SGL Data Block: Not Supported 00:07:52.361 Replay Protected Memory Block: Not Supported 00:07:52.361 00:07:52.361 Firmware Slot Information 00:07:52.361 ========================= 00:07:52.361 Active slot: 1 00:07:52.361 Slot 1 Firmware Revision: 1.0 00:07:52.361 00:07:52.361 00:07:52.361 Commands Supported and Effects 00:07:52.361 ============================== 00:07:52.361 Admin Commands 00:07:52.361 -------------- 00:07:52.361 Delete I/O Submission Queue (00h): Supported 00:07:52.361 Create I/O Submission Queue (01h): Supported 00:07:52.361 Get Log Page (02h): Supported 00:07:52.361 Delete I/O Completion Queue (04h): Supported 00:07:52.361 Create I/O Completion Queue (05h): Supported 00:07:52.361 Identify (06h): Supported 00:07:52.361 Abort (08h): Supported 00:07:52.361 Set Features (09h): Supported 00:07:52.361 Get Features (0Ah): Supported 00:07:52.361 Asynchronous Event Request (0Ch): Supported 00:07:52.361 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:52.361 Directive Send (19h): Supported 00:07:52.361 Directive Receive (1Ah): Supported 00:07:52.361 Virtualization Management (1Ch): Supported 00:07:52.361 Doorbell Buffer Config (7Ch): Supported 00:07:52.361 Format NVM (80h): Supported LBA-Change 00:07:52.361 I/O Commands 00:07:52.361 ------------ 00:07:52.361 Flush (00h): Supported LBA-Change 00:07:52.361 Write (01h): Supported LBA-Change 00:07:52.361 Read (02h): Supported 00:07:52.361 Compare (05h): Supported 00:07:52.361 Write Zeroes (08h): Supported LBA-Change 00:07:52.361 Dataset Management (09h): Supported LBA-Change 00:07:52.361 Unknown (0Ch): Supported 00:07:52.361 Unknown (12h): Supported 00:07:52.361 Copy (19h): Supported LBA-Change 00:07:52.361 Unknown (1Dh): Supported LBA-Change 00:07:52.361 00:07:52.361 Error Log 00:07:52.361 ========= 00:07:52.361 00:07:52.361 Arbitration 00:07:52.361 =========== 00:07:52.361 Arbitration Burst: no limit 00:07:52.361 00:07:52.361 Power Management 00:07:52.361 ================ 00:07:52.361 Number of Power States: 1 00:07:52.361 Current Power State: Power State #0 00:07:52.361 Power State #0: 00:07:52.362 Max Power: 25.00 W 00:07:52.362 Non-Operational State: Operational 00:07:52.362 Entry Latency: 16 microseconds 00:07:52.362 Exit Latency: 4 microseconds 00:07:52.362 Relative Read Throughput: 0 00:07:52.362 Relative Read Latency: 0 00:07:52.362 Relative Write Throughput: 0 00:07:52.362 Relative Write Latency: 0 00:07:52.362 Idle Power: Not Reported 00:07:52.362 Active Power: Not Reported 00:07:52.362 Non-Operational Permissive Mode: Not Supported 00:07:52.362 00:07:52.362 Health Information 00:07:52.362 ================== 00:07:52.362 Critical Warnings: 00:07:52.362 Available Spare Space: OK 00:07:52.362 Temperature: OK 00:07:52.362 Device Reliability: OK 00:07:52.362 Read Only: No 00:07:52.362 Volatile Memory Backup: OK 00:07:52.362 Current Temperature: 323 Kelvin (50 Celsius) 00:07:52.362 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:52.362 Available Spare: 0% 00:07:52.362 Available Spare Threshold: 0% 00:07:52.362 Life 
Percentage Used: 0% 00:07:52.362 Data Units Read: 670 00:07:52.362 Data Units Written: 599 00:07:52.362 Host Read Commands: 35677 00:07:52.362 Host Write Commands: 35463 00:07:52.362 Controller Busy Time: 0 minutes 00:07:52.362 Power Cycles: 0 00:07:52.362 Power On Hours: 0 hours 00:07:52.362 Unsafe Shutdowns: 0 00:07:52.362 Unrecoverable Media Errors: 0 00:07:52.362 Lifetime Error Log Entries: 0 00:07:52.362 Warning Temperature Time: 0 minutes 00:07:52.362 Critical Temperature Time: 0 minutes 00:07:52.362 00:07:52.362 Number of Queues 00:07:52.362 ================ 00:07:52.362 Number of I/O Submission Queues: 64 00:07:52.362 Number of I/O Completion Queues: 64 00:07:52.362 00:07:52.362 ZNS Specific Controller Data 00:07:52.362 ============================ 00:07:52.362 Zone Append Size Limit: 0 00:07:52.362 00:07:52.362 00:07:52.362 Active Namespaces 00:07:52.362 ================= 00:07:52.362 Namespace ID:1 00:07:52.362 Error Recovery Timeout: Unlimited 00:07:52.362 Command Set Identifier: NVM (00h) 00:07:52.362 Deallocate: Supported 00:07:52.362 Deallocated/Unwritten Error: Supported 00:07:52.362 Deallocated Read Value: All 0x00 00:07:52.362 Deallocate in Write Zeroes: Not Supported 00:07:52.362 Deallocated Guard Field: 0xFFFF 00:07:52.362 Flush: Supported 00:07:52.362 Reservation: Not Supported 00:07:52.362 Metadata Transferred as: Separate Metadata Buffer 00:07:52.362 Namespace Sharing Capabilities: Private 00:07:52.362 Size (in LBAs): 1548666 (5GiB) 00:07:52.362 Capacity (in LBAs): 1548666 (5GiB) 00:07:52.362 Utilization (in LBAs): 1548666 (5GiB) 00:07:52.362 Thin Provisioning: Not Supported 00:07:52.362 Per-NS Atomic Units: No 00:07:52.362 Maximum Single Source Range Length: 128 00:07:52.362 Maximum Copy Length: 128 00:07:52.362 Maximum Source Range Count: 128 00:07:52.362 NGUID/EUI64 Never Reused: No 00:07:52.362 Namespace Write Protected: No 00:07:52.362 Number of LBA Formats: 8 00:07:52.362 [2024-12-16 10:39:52.194783] nvme_ctrlr.c:3628:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0] process 74728 terminated unexpected 00:07:52.362 Current LBA Format: LBA Format #07 00:07:52.362 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:52.362 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:52.362 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:52.362 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:52.362 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:52.362 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:52.362 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:52.362 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:52.362 00:07:52.362 NVM Specific Namespace Data 00:07:52.362 =========================== 00:07:52.362 Logical Block Storage Tag Mask: 0 00:07:52.362 Protection Information Capabilities: 00:07:52.362 16b Guard Protection Information Storage Tag Support: No 00:07:52.362 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:52.362 Storage Tag Check Read Support: No 00:07:52.362 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.362 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.362 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.362 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.362 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.362
Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.362 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.362 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.362 ===================================================== 00:07:52.362 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:52.362 ===================================================== 00:07:52.362 Controller Capabilities/Features 00:07:52.362 ================================ 00:07:52.362 Vendor ID: 1b36 00:07:52.362 Subsystem Vendor ID: 1af4 00:07:52.362 Serial Number: 12341 00:07:52.362 Model Number: QEMU NVMe Ctrl 00:07:52.362 Firmware Version: 8.0.0 00:07:52.362 Recommended Arb Burst: 6 00:07:52.362 IEEE OUI Identifier: 00 54 52 00:07:52.362 Multi-path I/O 00:07:52.362 May have multiple subsystem ports: No 00:07:52.362 May have multiple controllers: No 00:07:52.362 Associated with SR-IOV VF: No 00:07:52.362 Max Data Transfer Size: 524288 00:07:52.362 Max Number of Namespaces: 256 00:07:52.362 Max Number of I/O Queues: 64 00:07:52.362 NVMe Specification Version (VS): 1.4 00:07:52.362 NVMe Specification Version (Identify): 1.4 00:07:52.362 Maximum Queue Entries: 2048 00:07:52.362 Contiguous Queues Required: Yes 00:07:52.362 Arbitration Mechanisms Supported 00:07:52.362 Weighted Round Robin: Not Supported 00:07:52.362 Vendor Specific: Not Supported 00:07:52.362 Reset Timeout: 7500 ms 00:07:52.362 Doorbell Stride: 4 bytes 00:07:52.362 NVM Subsystem Reset: Not Supported 00:07:52.362 Command Sets Supported 00:07:52.362 NVM Command Set: Supported 00:07:52.362 Boot Partition: Not Supported 00:07:52.362 Memory Page Size Minimum: 4096 bytes 00:07:52.362 Memory Page Size Maximum: 65536 bytes 00:07:52.362 Persistent Memory Region: Not Supported 00:07:52.362 Optional Asynchronous Events Supported 00:07:52.362 Namespace Attribute Notices: Supported 00:07:52.362 Firmware Activation Notices: Not Supported 00:07:52.362 ANA Change Notices: Not Supported 00:07:52.362 PLE Aggregate Log Change Notices: Not Supported 00:07:52.362 LBA Status Info Alert Notices: Not Supported 00:07:52.362 EGE Aggregate Log Change Notices: Not Supported 00:07:52.362 Normal NVM Subsystem Shutdown event: Not Supported 00:07:52.362 Zone Descriptor Change Notices: Not Supported 00:07:52.362 Discovery Log Change Notices: Not Supported 00:07:52.362 Controller Attributes 00:07:52.362 128-bit Host Identifier: Not Supported 00:07:52.362 Non-Operational Permissive Mode: Not Supported 00:07:52.362 NVM Sets: Not Supported 00:07:52.362 Read Recovery Levels: Not Supported 00:07:52.362 Endurance Groups: Not Supported 00:07:52.362 Predictable Latency Mode: Not Supported 00:07:52.362 Traffic Based Keep ALive: Not Supported 00:07:52.362 Namespace Granularity: Not Supported 00:07:52.362 SQ Associations: Not Supported 00:07:52.362 UUID List: Not Supported 00:07:52.362 Multi-Domain Subsystem: Not Supported 00:07:52.362 Fixed Capacity Management: Not Supported 00:07:52.362 Variable Capacity Management: Not Supported 00:07:52.362 Delete Endurance Group: Not Supported 00:07:52.362 Delete NVM Set: Not Supported 00:07:52.362 Extended LBA Formats Supported: Supported 00:07:52.362 Flexible Data Placement Supported: Not Supported 00:07:52.362 00:07:52.362 Controller Memory Buffer Support 00:07:52.362 ================================ 00:07:52.362 Supported: No 00:07:52.362 00:07:52.362 Persistent Memory Region Support 00:07:52.362 
================================ 00:07:52.362 Supported: No 00:07:52.362 00:07:52.362 Admin Command Set Attributes 00:07:52.362 ============================ 00:07:52.362 Security Send/Receive: Not Supported 00:07:52.362 Format NVM: Supported 00:07:52.362 Firmware Activate/Download: Not Supported 00:07:52.362 Namespace Management: Supported 00:07:52.362 Device Self-Test: Not Supported 00:07:52.362 Directives: Supported 00:07:52.362 NVMe-MI: Not Supported 00:07:52.362 Virtualization Management: Not Supported 00:07:52.362 Doorbell Buffer Config: Supported 00:07:52.362 Get LBA Status Capability: Not Supported 00:07:52.362 Command & Feature Lockdown Capability: Not Supported 00:07:52.362 Abort Command Limit: 4 00:07:52.362 Async Event Request Limit: 4 00:07:52.362 Number of Firmware Slots: N/A 00:07:52.362 Firmware Slot 1 Read-Only: N/A 00:07:52.362 Firmware Activation Without Reset: N/A 00:07:52.362 Multiple Update Detection Support: N/A 00:07:52.362 Firmware Update Granularity: No Information Provided 00:07:52.362 Per-Namespace SMART Log: Yes 00:07:52.362 Asymmetric Namespace Access Log Page: Not Supported 00:07:52.362 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:07:52.362 Command Effects Log Page: Supported 00:07:52.362 Get Log Page Extended Data: Supported 00:07:52.362 Telemetry Log Pages: Not Supported 00:07:52.362 Persistent Event Log Pages: Not Supported 00:07:52.362 Supported Log Pages Log Page: May Support 00:07:52.362 Commands Supported & Effects Log Page: Not Supported 00:07:52.362 Feature Identifiers & Effects Log Page:May Support 00:07:52.363 NVMe-MI Commands & Effects Log Page: May Support 00:07:52.363 Data Area 4 for Telemetry Log: Not Supported 00:07:52.363 Error Log Page Entries Supported: 1 00:07:52.363 Keep Alive: Not Supported 00:07:52.363 00:07:52.363 NVM Command Set Attributes 00:07:52.363 ========================== 00:07:52.363 Submission Queue Entry Size 00:07:52.363 Max: 64 00:07:52.363 Min: 64 00:07:52.363 Completion Queue Entry Size 00:07:52.363 Max: 16 00:07:52.363 Min: 16 00:07:52.363 Number of Namespaces: 256 00:07:52.363 Compare Command: Supported 00:07:52.363 Write Uncorrectable Command: Not Supported 00:07:52.363 Dataset Management Command: Supported 00:07:52.363 Write Zeroes Command: Supported 00:07:52.363 Set Features Save Field: Supported 00:07:52.363 Reservations: Not Supported 00:07:52.363 Timestamp: Supported 00:07:52.363 Copy: Supported 00:07:52.363 Volatile Write Cache: Present 00:07:52.363 Atomic Write Unit (Normal): 1 00:07:52.363 Atomic Write Unit (PFail): 1 00:07:52.363 Atomic Compare & Write Unit: 1 00:07:52.363 Fused Compare & Write: Not Supported 00:07:52.363 Scatter-Gather List 00:07:52.363 SGL Command Set: Supported 00:07:52.363 SGL Keyed: Not Supported 00:07:52.363 SGL Bit Bucket Descriptor: Not Supported 00:07:52.363 SGL Metadata Pointer: Not Supported 00:07:52.363 Oversized SGL: Not Supported 00:07:52.363 SGL Metadata Address: Not Supported 00:07:52.363 SGL Offset: Not Supported 00:07:52.363 Transport SGL Data Block: Not Supported 00:07:52.363 Replay Protected Memory Block: Not Supported 00:07:52.363 00:07:52.363 Firmware Slot Information 00:07:52.363 ========================= 00:07:52.363 Active slot: 1 00:07:52.363 Slot 1 Firmware Revision: 1.0 00:07:52.363 00:07:52.363 00:07:52.363 Commands Supported and Effects 00:07:52.363 ============================== 00:07:52.363 Admin Commands 00:07:52.363 -------------- 00:07:52.363 Delete I/O Submission Queue (00h): Supported 00:07:52.363 Create I/O Submission Queue (01h): Supported 00:07:52.363 
Get Log Page (02h): Supported 00:07:52.363 Delete I/O Completion Queue (04h): Supported 00:07:52.363 Create I/O Completion Queue (05h): Supported 00:07:52.363 Identify (06h): Supported 00:07:52.363 Abort (08h): Supported 00:07:52.363 Set Features (09h): Supported 00:07:52.363 Get Features (0Ah): Supported 00:07:52.363 Asynchronous Event Request (0Ch): Supported 00:07:52.363 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:52.363 Directive Send (19h): Supported 00:07:52.363 Directive Receive (1Ah): Supported 00:07:52.363 Virtualization Management (1Ch): Supported 00:07:52.363 Doorbell Buffer Config (7Ch): Supported 00:07:52.363 Format NVM (80h): Supported LBA-Change 00:07:52.363 I/O Commands 00:07:52.363 ------------ 00:07:52.363 Flush (00h): Supported LBA-Change 00:07:52.363 Write (01h): Supported LBA-Change 00:07:52.363 Read (02h): Supported 00:07:52.363 Compare (05h): Supported 00:07:52.363 Write Zeroes (08h): Supported LBA-Change 00:07:52.363 Dataset Management (09h): Supported LBA-Change 00:07:52.363 Unknown (0Ch): Supported 00:07:52.363 Unknown (12h): Supported 00:07:52.363 Copy (19h): Supported LBA-Change 00:07:52.363 Unknown (1Dh): Supported LBA-Change 00:07:52.363 00:07:52.363 Error Log 00:07:52.363 ========= 00:07:52.363 00:07:52.363 Arbitration 00:07:52.363 =========== 00:07:52.363 Arbitration Burst: no limit 00:07:52.363 00:07:52.363 Power Management 00:07:52.363 ================ 00:07:52.363 Number of Power States: 1 00:07:52.363 Current Power State: Power State #0 00:07:52.363 Power State #0: 00:07:52.363 Max Power: 25.00 W 00:07:52.363 Non-Operational State: Operational 00:07:52.363 Entry Latency: 16 microseconds 00:07:52.363 Exit Latency: 4 microseconds 00:07:52.363 Relative Read Throughput: 0 00:07:52.363 Relative Read Latency: 0 00:07:52.363 Relative Write Throughput: 0 00:07:52.363 Relative Write Latency: 0 00:07:52.363 Idle Power: Not Reported 00:07:52.363 Active Power: Not Reported 00:07:52.363 Non-Operational Permissive Mode: Not Supported 00:07:52.363 00:07:52.363 Health Information 00:07:52.363 ================== 00:07:52.363 Critical Warnings: 00:07:52.363 Available Spare Space: OK 00:07:52.363 Temperature: OK 00:07:52.363 Device Reliability: OK 00:07:52.363 Read Only: No 00:07:52.363 Volatile Memory Backup: OK 00:07:52.363 Current Temperature: 323 Kelvin (50 Celsius) 00:07:52.363 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:52.363 Available Spare: 0% 00:07:52.363 Available Spare Threshold: 0% 00:07:52.363 Life Percentage Used: 0% 00:07:52.363 Data Units Read: 1021 00:07:52.363 Data Units Written: 880 00:07:52.363 Host Read Commands: 52599 00:07:52.363 Host Write Commands: 51287 00:07:52.363 Controller Busy Time: 0 minutes 00:07:52.363 Power Cycles: 0 00:07:52.363 Power On Hours: 0 hours 00:07:52.363 Unsafe Shutdowns: 0 00:07:52.363 Unrecoverable Media Errors: 0 00:07:52.363 Lifetime Error Log Entries: 0 00:07:52.363 Warning Temperature Time: 0 minutes 00:07:52.363 Critical Temperature Time: 0 minutes 00:07:52.363 00:07:52.363 Number of Queues 00:07:52.363 ================ 00:07:52.363 Number of I/O Submission Queues: 64 00:07:52.363 Number of I/O Completion Queues: 64 00:07:52.363 00:07:52.363 ZNS Specific Controller Data 00:07:52.363 ============================ 00:07:52.363 Zone Append Size Limit: 0 00:07:52.363 00:07:52.363 00:07:52.363 Active Namespaces 00:07:52.363 ================= 00:07:52.363 Namespace ID:1 00:07:52.363 Error Recovery Timeout: Unlimited 00:07:52.363 Command Set Identifier: NVM (00h) 00:07:52.363 Deallocate: Supported 
00:07:52.363 Deallocated/Unwritten Error: Supported 00:07:52.363 Deallocated Read Value: All 0x00 00:07:52.363 Deallocate in Write Zeroes: Not Supported 00:07:52.363 Deallocated Guard Field: 0xFFFF 00:07:52.363 Flush: Supported 00:07:52.363 Reservation: Not Supported 00:07:52.363 Namespace Sharing Capabilities: Private 00:07:52.363 Size (in LBAs): 1310720 (5GiB) 00:07:52.363 Capacity (in LBAs): 1310720 (5GiB) 00:07:52.363 Utilization (in LBAs): 1310720 (5GiB) 00:07:52.363 Thin Provisioning: Not Supported 00:07:52.363 Per-NS Atomic Units: No 00:07:52.363 Maximum Single Source Range Length: 128 00:07:52.363 Maximum Copy Length: 128 00:07:52.363 Maximum Source Range Count: 128 00:07:52.363 NGUID/EUI64 Never Reused: No 00:07:52.363 Namespace Write Protected: No 00:07:52.363 Number of LBA Formats: 8 00:07:52.363 Current LBA Format: LBA Format #04 00:07:52.363 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:52.363 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:52.363 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:52.363 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:52.363 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:52.363 [2024-12-16 10:39:52.196710] nvme_ctrlr.c:3628:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0] process 74728 terminated unexpected 00:07:52.363 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:52.363 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:52.363 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:52.363 00:07:52.363 NVM Specific Namespace Data 00:07:52.363 =========================== 00:07:52.363 Logical Block Storage Tag Mask: 0 00:07:52.363 Protection Information Capabilities: 00:07:52.363 16b Guard Protection Information Storage Tag Support: No 00:07:52.363 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:52.363 Storage Tag Check Read Support: No 00:07:52.363 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.363 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.363 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.363 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.363 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.363 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.363 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.363 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.363 ===================================================== 00:07:52.363 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:52.363 ===================================================== 00:07:52.363 Controller Capabilities/Features 00:07:52.363 ================================ 00:07:52.363 Vendor ID: 1b36 00:07:52.363 Subsystem Vendor ID: 1af4 00:07:52.363 Serial Number: 12342 00:07:52.363 Model Number: QEMU NVMe Ctrl 00:07:52.363 Firmware Version: 8.0.0 00:07:52.363 Recommended Arb Burst: 6 00:07:52.363 IEEE OUI Identifier: 00 54 52 00:07:52.363 Multi-path I/O 00:07:52.363 May have multiple subsystem ports: No 00:07:52.363 May have multiple controllers: No 00:07:52.363 Associated with SR-IOV VF: No 00:07:52.363 Max Data Transfer Size: 524288 00:07:52.363 Max Number of Namespaces: 256 00:07:52.363 Max 
Number of I/O Queues: 64 00:07:52.363 NVMe Specification Version (VS): 1.4 00:07:52.363 NVMe Specification Version (Identify): 1.4 00:07:52.363 Maximum Queue Entries: 2048 00:07:52.363 Contiguous Queues Required: Yes 00:07:52.363 Arbitration Mechanisms Supported 00:07:52.363 Weighted Round Robin: Not Supported 00:07:52.363 Vendor Specific: Not Supported 00:07:52.364 Reset Timeout: 7500 ms 00:07:52.364 Doorbell Stride: 4 bytes 00:07:52.364 NVM Subsystem Reset: Not Supported 00:07:52.364 Command Sets Supported 00:07:52.364 NVM Command Set: Supported 00:07:52.364 Boot Partition: Not Supported 00:07:52.364 Memory Page Size Minimum: 4096 bytes 00:07:52.364 Memory Page Size Maximum: 65536 bytes 00:07:52.364 Persistent Memory Region: Not Supported 00:07:52.364 Optional Asynchronous Events Supported 00:07:52.364 Namespace Attribute Notices: Supported 00:07:52.364 Firmware Activation Notices: Not Supported 00:07:52.364 ANA Change Notices: Not Supported 00:07:52.364 PLE Aggregate Log Change Notices: Not Supported 00:07:52.364 LBA Status Info Alert Notices: Not Supported 00:07:52.364 EGE Aggregate Log Change Notices: Not Supported 00:07:52.364 Normal NVM Subsystem Shutdown event: Not Supported 00:07:52.364 Zone Descriptor Change Notices: Not Supported 00:07:52.364 Discovery Log Change Notices: Not Supported 00:07:52.364 Controller Attributes 00:07:52.364 128-bit Host Identifier: Not Supported 00:07:52.364 Non-Operational Permissive Mode: Not Supported 00:07:52.364 NVM Sets: Not Supported 00:07:52.364 Read Recovery Levels: Not Supported 00:07:52.364 Endurance Groups: Not Supported 00:07:52.364 Predictable Latency Mode: Not Supported 00:07:52.364 Traffic Based Keep ALive: Not Supported 00:07:52.364 Namespace Granularity: Not Supported 00:07:52.364 SQ Associations: Not Supported 00:07:52.364 UUID List: Not Supported 00:07:52.364 Multi-Domain Subsystem: Not Supported 00:07:52.364 Fixed Capacity Management: Not Supported 00:07:52.364 Variable Capacity Management: Not Supported 00:07:52.364 Delete Endurance Group: Not Supported 00:07:52.364 Delete NVM Set: Not Supported 00:07:52.364 Extended LBA Formats Supported: Supported 00:07:52.364 Flexible Data Placement Supported: Not Supported 00:07:52.364 00:07:52.364 Controller Memory Buffer Support 00:07:52.364 ================================ 00:07:52.364 Supported: No 00:07:52.364 00:07:52.364 Persistent Memory Region Support 00:07:52.364 ================================ 00:07:52.364 Supported: No 00:07:52.364 00:07:52.364 Admin Command Set Attributes 00:07:52.364 ============================ 00:07:52.364 Security Send/Receive: Not Supported 00:07:52.364 Format NVM: Supported 00:07:52.364 Firmware Activate/Download: Not Supported 00:07:52.364 Namespace Management: Supported 00:07:52.364 Device Self-Test: Not Supported 00:07:52.364 Directives: Supported 00:07:52.364 NVMe-MI: Not Supported 00:07:52.364 Virtualization Management: Not Supported 00:07:52.364 Doorbell Buffer Config: Supported 00:07:52.364 Get LBA Status Capability: Not Supported 00:07:52.364 Command & Feature Lockdown Capability: Not Supported 00:07:52.364 Abort Command Limit: 4 00:07:52.364 Async Event Request Limit: 4 00:07:52.364 Number of Firmware Slots: N/A 00:07:52.364 Firmware Slot 1 Read-Only: N/A 00:07:52.364 Firmware Activation Without Reset: N/A 00:07:52.364 Multiple Update Detection Support: N/A 00:07:52.364 Firmware Update Granularity: No Information Provided 00:07:52.364 Per-Namespace SMART Log: Yes 00:07:52.364 Asymmetric Namespace Access Log Page: Not Supported 00:07:52.364 Subsystem 
NQN: nqn.2019-08.org.qemu:12342 00:07:52.364 Command Effects Log Page: Supported 00:07:52.364 Get Log Page Extended Data: Supported 00:07:52.364 Telemetry Log Pages: Not Supported 00:07:52.364 Persistent Event Log Pages: Not Supported 00:07:52.364 Supported Log Pages Log Page: May Support 00:07:52.364 Commands Supported & Effects Log Page: Not Supported 00:07:52.364 Feature Identifiers & Effects Log Page:May Support 00:07:52.364 NVMe-MI Commands & Effects Log Page: May Support 00:07:52.364 Data Area 4 for Telemetry Log: Not Supported 00:07:52.364 Error Log Page Entries Supported: 1 00:07:52.364 Keep Alive: Not Supported 00:07:52.364 00:07:52.364 NVM Command Set Attributes 00:07:52.364 ========================== 00:07:52.364 Submission Queue Entry Size 00:07:52.364 Max: 64 00:07:52.364 Min: 64 00:07:52.364 Completion Queue Entry Size 00:07:52.364 Max: 16 00:07:52.364 Min: 16 00:07:52.364 Number of Namespaces: 256 00:07:52.364 Compare Command: Supported 00:07:52.364 Write Uncorrectable Command: Not Supported 00:07:52.364 Dataset Management Command: Supported 00:07:52.364 Write Zeroes Command: Supported 00:07:52.364 Set Features Save Field: Supported 00:07:52.364 Reservations: Not Supported 00:07:52.364 Timestamp: Supported 00:07:52.364 Copy: Supported 00:07:52.364 Volatile Write Cache: Present 00:07:52.364 Atomic Write Unit (Normal): 1 00:07:52.364 Atomic Write Unit (PFail): 1 00:07:52.364 Atomic Compare & Write Unit: 1 00:07:52.364 Fused Compare & Write: Not Supported 00:07:52.364 Scatter-Gather List 00:07:52.364 SGL Command Set: Supported 00:07:52.364 SGL Keyed: Not Supported 00:07:52.364 SGL Bit Bucket Descriptor: Not Supported 00:07:52.364 SGL Metadata Pointer: Not Supported 00:07:52.364 Oversized SGL: Not Supported 00:07:52.364 SGL Metadata Address: Not Supported 00:07:52.364 SGL Offset: Not Supported 00:07:52.364 Transport SGL Data Block: Not Supported 00:07:52.364 Replay Protected Memory Block: Not Supported 00:07:52.364 00:07:52.364 Firmware Slot Information 00:07:52.364 ========================= 00:07:52.364 Active slot: 1 00:07:52.364 Slot 1 Firmware Revision: 1.0 00:07:52.364 00:07:52.364 00:07:52.364 Commands Supported and Effects 00:07:52.364 ============================== 00:07:52.364 Admin Commands 00:07:52.364 -------------- 00:07:52.364 Delete I/O Submission Queue (00h): Supported 00:07:52.364 Create I/O Submission Queue (01h): Supported 00:07:52.364 Get Log Page (02h): Supported 00:07:52.364 Delete I/O Completion Queue (04h): Supported 00:07:52.364 Create I/O Completion Queue (05h): Supported 00:07:52.364 Identify (06h): Supported 00:07:52.364 Abort (08h): Supported 00:07:52.364 Set Features (09h): Supported 00:07:52.364 Get Features (0Ah): Supported 00:07:52.364 Asynchronous Event Request (0Ch): Supported 00:07:52.364 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:52.364 Directive Send (19h): Supported 00:07:52.364 Directive Receive (1Ah): Supported 00:07:52.364 Virtualization Management (1Ch): Supported 00:07:52.364 Doorbell Buffer Config (7Ch): Supported 00:07:52.364 Format NVM (80h): Supported LBA-Change 00:07:52.364 I/O Commands 00:07:52.364 ------------ 00:07:52.364 Flush (00h): Supported LBA-Change 00:07:52.364 Write (01h): Supported LBA-Change 00:07:52.364 Read (02h): Supported 00:07:52.364 Compare (05h): Supported 00:07:52.364 Write Zeroes (08h): Supported LBA-Change 00:07:52.364 Dataset Management (09h): Supported LBA-Change 00:07:52.364 Unknown (0Ch): Supported 00:07:52.364 Unknown (12h): Supported 00:07:52.364 Copy (19h): Supported LBA-Change 
00:07:52.364 Unknown (1Dh): Supported LBA-Change 00:07:52.364 00:07:52.364 Error Log 00:07:52.364 ========= 00:07:52.364 00:07:52.364 Arbitration 00:07:52.364 =========== 00:07:52.364 Arbitration Burst: no limit 00:07:52.364 00:07:52.364 Power Management 00:07:52.364 ================ 00:07:52.364 Number of Power States: 1 00:07:52.364 Current Power State: Power State #0 00:07:52.364 Power State #0: 00:07:52.364 Max Power: 25.00 W 00:07:52.364 Non-Operational State: Operational 00:07:52.364 Entry Latency: 16 microseconds 00:07:52.364 Exit Latency: 4 microseconds 00:07:52.364 Relative Read Throughput: 0 00:07:52.364 Relative Read Latency: 0 00:07:52.364 Relative Write Throughput: 0 00:07:52.364 Relative Write Latency: 0 00:07:52.364 Idle Power: Not Reported 00:07:52.364 Active Power: Not Reported 00:07:52.364 Non-Operational Permissive Mode: Not Supported 00:07:52.365 00:07:52.365 Health Information 00:07:52.365 ================== 00:07:52.365 Critical Warnings: 00:07:52.365 Available Spare Space: OK 00:07:52.365 Temperature: OK 00:07:52.365 Device Reliability: OK 00:07:52.365 Read Only: No 00:07:52.365 Volatile Memory Backup: OK 00:07:52.365 Current Temperature: 323 Kelvin (50 Celsius) 00:07:52.365 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:52.365 Available Spare: 0% 00:07:52.365 Available Spare Threshold: 0% 00:07:52.365 Life Percentage Used: 0% 00:07:52.365 Data Units Read: 2174 00:07:52.365 Data Units Written: 1961 00:07:52.365 Host Read Commands: 109201 00:07:52.365 Host Write Commands: 107471 00:07:52.365 Controller Busy Time: 0 minutes 00:07:52.365 Power Cycles: 0 00:07:52.365 Power On Hours: 0 hours 00:07:52.365 Unsafe Shutdowns: 0 00:07:52.365 Unrecoverable Media Errors: 0 00:07:52.365 Lifetime Error Log Entries: 0 00:07:52.365 Warning Temperature Time: 0 minutes 00:07:52.365 Critical Temperature Time: 0 minutes 00:07:52.365 00:07:52.365 Number of Queues 00:07:52.365 ================ 00:07:52.365 Number of I/O Submission Queues: 64 00:07:52.365 Number of I/O Completion Queues: 64 00:07:52.365 00:07:52.365 ZNS Specific Controller Data 00:07:52.365 ============================ 00:07:52.365 Zone Append Size Limit: 0 00:07:52.365 00:07:52.365 00:07:52.365 Active Namespaces 00:07:52.365 ================= 00:07:52.365 Namespace ID:1 00:07:52.365 Error Recovery Timeout: Unlimited 00:07:52.365 Command Set Identifier: NVM (00h) 00:07:52.365 Deallocate: Supported 00:07:52.365 Deallocated/Unwritten Error: Supported 00:07:52.365 Deallocated Read Value: All 0x00 00:07:52.365 Deallocate in Write Zeroes: Not Supported 00:07:52.365 Deallocated Guard Field: 0xFFFF 00:07:52.365 Flush: Supported 00:07:52.365 Reservation: Not Supported 00:07:52.365 Namespace Sharing Capabilities: Private 00:07:52.365 Size (in LBAs): 1048576 (4GiB) 00:07:52.365 Capacity (in LBAs): 1048576 (4GiB) 00:07:52.365 Utilization (in LBAs): 1048576 (4GiB) 00:07:52.365 Thin Provisioning: Not Supported 00:07:52.365 Per-NS Atomic Units: No 00:07:52.365 Maximum Single Source Range Length: 128 00:07:52.365 Maximum Copy Length: 128 00:07:52.365 Maximum Source Range Count: 128 00:07:52.365 NGUID/EUI64 Never Reused: No 00:07:52.365 Namespace Write Protected: No 00:07:52.365 Number of LBA Formats: 8 00:07:52.365 Current LBA Format: LBA Format #04 00:07:52.365 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:52.365 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:52.365 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:52.365 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:52.365 LBA Format #04: Data Size: 
4096 Metadata Size: 0 00:07:52.365 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:52.365 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:52.365 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:52.365 00:07:52.365 NVM Specific Namespace Data 00:07:52.365 =========================== 00:07:52.365 Logical Block Storage Tag Mask: 0 00:07:52.365 Protection Information Capabilities: 00:07:52.365 16b Guard Protection Information Storage Tag Support: No 00:07:52.365 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:52.365 Storage Tag Check Read Support: No 00:07:52.365 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.365 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.365 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.365 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.365 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.365 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.365 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.365 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.365 Namespace ID:2 00:07:52.365 Error Recovery Timeout: Unlimited 00:07:52.365 Command Set Identifier: NVM (00h) 00:07:52.365 Deallocate: Supported 00:07:52.365 Deallocated/Unwritten Error: Supported 00:07:52.365 Deallocated Read Value: All 0x00 00:07:52.365 Deallocate in Write Zeroes: Not Supported 00:07:52.365 Deallocated Guard Field: 0xFFFF 00:07:52.365 Flush: Supported 00:07:52.365 Reservation: Not Supported 00:07:52.365 Namespace Sharing Capabilities: Private 00:07:52.365 Size (in LBAs): 1048576 (4GiB) 00:07:52.365 Capacity (in LBAs): 1048576 (4GiB) 00:07:52.365 Utilization (in LBAs): 1048576 (4GiB) 00:07:52.365 Thin Provisioning: Not Supported 00:07:52.365 Per-NS Atomic Units: No 00:07:52.365 Maximum Single Source Range Length: 128 00:07:52.365 Maximum Copy Length: 128 00:07:52.365 Maximum Source Range Count: 128 00:07:52.365 NGUID/EUI64 Never Reused: No 00:07:52.365 Namespace Write Protected: No 00:07:52.365 Number of LBA Formats: 8 00:07:52.365 Current LBA Format: LBA Format #04 00:07:52.365 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:52.365 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:52.365 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:52.365 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:52.365 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:52.365 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:52.365 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:52.365 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:52.365 00:07:52.365 NVM Specific Namespace Data 00:07:52.365 =========================== 00:07:52.365 Logical Block Storage Tag Mask: 0 00:07:52.365 Protection Information Capabilities: 00:07:52.365 16b Guard Protection Information Storage Tag Support: No 00:07:52.365 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:52.365 Storage Tag Check Read Support: No 00:07:52.365 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.365 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard 
PI 00:07:52.365 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.365 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.365 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.365 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.365 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.365 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.365 Namespace ID:3 00:07:52.365 Error Recovery Timeout: Unlimited 00:07:52.365 Command Set Identifier: NVM (00h) 00:07:52.365 Deallocate: Supported 00:07:52.365 Deallocated/Unwritten Error: Supported 00:07:52.365 Deallocated Read Value: All 0x00 00:07:52.365 Deallocate in Write Zeroes: Not Supported 00:07:52.365 Deallocated Guard Field: 0xFFFF 00:07:52.365 Flush: Supported 00:07:52.365 Reservation: Not Supported 00:07:52.365 Namespace Sharing Capabilities: Private 00:07:52.365 Size (in LBAs): 1048576 (4GiB) 00:07:52.365 Capacity (in LBAs): 1048576 (4GiB) 00:07:52.365 Utilization (in LBAs): 1048576 (4GiB) 00:07:52.365 Thin Provisioning: Not Supported 00:07:52.365 Per-NS Atomic Units: No 00:07:52.365 Maximum Single Source Range Length: 128 00:07:52.365 Maximum Copy Length: 128 00:07:52.365 Maximum Source Range Count: 128 00:07:52.365 NGUID/EUI64 Never Reused: No 00:07:52.365 Namespace Write Protected: No 00:07:52.365 Number of LBA Formats: 8 00:07:52.365 Current LBA Format: LBA Format #04 00:07:52.365 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:52.365 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:52.365 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:52.365 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:52.365 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:52.365 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:52.365 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:52.365 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:52.365 00:07:52.365 NVM Specific Namespace Data 00:07:52.365 =========================== 00:07:52.365 Logical Block Storage Tag Mask: 0 00:07:52.365 Protection Information Capabilities: 00:07:52.365 16b Guard Protection Information Storage Tag Support: No 00:07:52.365 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:52.365 Storage Tag Check Read Support: No 00:07:52.365 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.365 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.365 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.365 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.365 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.365 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.365 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.365 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.365 10:39:52 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:52.365 10:39:52 nvme.nvme_identify -- nvme/nvme.sh@16 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:07:52.627 ===================================================== 00:07:52.627 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:52.627 ===================================================== 00:07:52.627 Controller Capabilities/Features 00:07:52.627 ================================ 00:07:52.627 Vendor ID: 1b36 00:07:52.627 Subsystem Vendor ID: 1af4 00:07:52.627 Serial Number: 12340 00:07:52.627 Model Number: QEMU NVMe Ctrl 00:07:52.627 Firmware Version: 8.0.0 00:07:52.627 Recommended Arb Burst: 6 00:07:52.627 IEEE OUI Identifier: 00 54 52 00:07:52.627 Multi-path I/O 00:07:52.627 May have multiple subsystem ports: No 00:07:52.627 May have multiple controllers: No 00:07:52.627 Associated with SR-IOV VF: No 00:07:52.627 Max Data Transfer Size: 524288 00:07:52.627 Max Number of Namespaces: 256 00:07:52.627 Max Number of I/O Queues: 64 00:07:52.627 NVMe Specification Version (VS): 1.4 00:07:52.627 NVMe Specification Version (Identify): 1.4 00:07:52.627 Maximum Queue Entries: 2048 00:07:52.627 Contiguous Queues Required: Yes 00:07:52.627 Arbitration Mechanisms Supported 00:07:52.627 Weighted Round Robin: Not Supported 00:07:52.627 Vendor Specific: Not Supported 00:07:52.627 Reset Timeout: 7500 ms 00:07:52.627 Doorbell Stride: 4 bytes 00:07:52.627 NVM Subsystem Reset: Not Supported 00:07:52.627 Command Sets Supported 00:07:52.627 NVM Command Set: Supported 00:07:52.627 Boot Partition: Not Supported 00:07:52.627 Memory Page Size Minimum: 4096 bytes 00:07:52.627 Memory Page Size Maximum: 65536 bytes 00:07:52.627 Persistent Memory Region: Not Supported 00:07:52.627 Optional Asynchronous Events Supported 00:07:52.627 Namespace Attribute Notices: Supported 00:07:52.627 Firmware Activation Notices: Not Supported 00:07:52.627 ANA Change Notices: Not Supported 00:07:52.627 PLE Aggregate Log Change Notices: Not Supported 00:07:52.627 LBA Status Info Alert Notices: Not Supported 00:07:52.627 EGE Aggregate Log Change Notices: Not Supported 00:07:52.627 Normal NVM Subsystem Shutdown event: Not Supported 00:07:52.627 Zone Descriptor Change Notices: Not Supported 00:07:52.627 Discovery Log Change Notices: Not Supported 00:07:52.627 Controller Attributes 00:07:52.627 128-bit Host Identifier: Not Supported 00:07:52.627 Non-Operational Permissive Mode: Not Supported 00:07:52.627 NVM Sets: Not Supported 00:07:52.627 Read Recovery Levels: Not Supported 00:07:52.627 Endurance Groups: Not Supported 00:07:52.627 Predictable Latency Mode: Not Supported 00:07:52.627 Traffic Based Keep ALive: Not Supported 00:07:52.627 Namespace Granularity: Not Supported 00:07:52.627 SQ Associations: Not Supported 00:07:52.627 UUID List: Not Supported 00:07:52.627 Multi-Domain Subsystem: Not Supported 00:07:52.627 Fixed Capacity Management: Not Supported 00:07:52.627 Variable Capacity Management: Not Supported 00:07:52.627 Delete Endurance Group: Not Supported 00:07:52.627 Delete NVM Set: Not Supported 00:07:52.627 Extended LBA Formats Supported: Supported 00:07:52.627 Flexible Data Placement Supported: Not Supported 00:07:52.627 00:07:52.627 Controller Memory Buffer Support 00:07:52.627 ================================ 00:07:52.627 Supported: No 00:07:52.627 00:07:52.627 Persistent Memory Region Support 00:07:52.627 ================================ 00:07:52.627 Supported: No 00:07:52.627 00:07:52.627 Admin Command Set Attributes 00:07:52.627 ============================ 00:07:52.627 Security Send/Receive: Not Supported 00:07:52.627 
Format NVM: Supported 00:07:52.627 Firmware Activate/Download: Not Supported 00:07:52.627 Namespace Management: Supported 00:07:52.627 Device Self-Test: Not Supported 00:07:52.627 Directives: Supported 00:07:52.627 NVMe-MI: Not Supported 00:07:52.627 Virtualization Management: Not Supported 00:07:52.627 Doorbell Buffer Config: Supported 00:07:52.627 Get LBA Status Capability: Not Supported 00:07:52.627 Command & Feature Lockdown Capability: Not Supported 00:07:52.627 Abort Command Limit: 4 00:07:52.627 Async Event Request Limit: 4 00:07:52.627 Number of Firmware Slots: N/A 00:07:52.627 Firmware Slot 1 Read-Only: N/A 00:07:52.627 Firmware Activation Without Reset: N/A 00:07:52.627 Multiple Update Detection Support: N/A 00:07:52.627 Firmware Update Granularity: No Information Provided 00:07:52.627 Per-Namespace SMART Log: Yes 00:07:52.627 Asymmetric Namespace Access Log Page: Not Supported 00:07:52.627 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:07:52.627 Command Effects Log Page: Supported 00:07:52.627 Get Log Page Extended Data: Supported 00:07:52.627 Telemetry Log Pages: Not Supported 00:07:52.627 Persistent Event Log Pages: Not Supported 00:07:52.627 Supported Log Pages Log Page: May Support 00:07:52.627 Commands Supported & Effects Log Page: Not Supported 00:07:52.627 Feature Identifiers & Effects Log Page:May Support 00:07:52.627 NVMe-MI Commands & Effects Log Page: May Support 00:07:52.627 Data Area 4 for Telemetry Log: Not Supported 00:07:52.627 Error Log Page Entries Supported: 1 00:07:52.627 Keep Alive: Not Supported 00:07:52.627 00:07:52.627 NVM Command Set Attributes 00:07:52.627 ========================== 00:07:52.627 Submission Queue Entry Size 00:07:52.627 Max: 64 00:07:52.627 Min: 64 00:07:52.627 Completion Queue Entry Size 00:07:52.627 Max: 16 00:07:52.627 Min: 16 00:07:52.627 Number of Namespaces: 256 00:07:52.627 Compare Command: Supported 00:07:52.627 Write Uncorrectable Command: Not Supported 00:07:52.627 Dataset Management Command: Supported 00:07:52.627 Write Zeroes Command: Supported 00:07:52.627 Set Features Save Field: Supported 00:07:52.627 Reservations: Not Supported 00:07:52.627 Timestamp: Supported 00:07:52.627 Copy: Supported 00:07:52.627 Volatile Write Cache: Present 00:07:52.627 Atomic Write Unit (Normal): 1 00:07:52.627 Atomic Write Unit (PFail): 1 00:07:52.627 Atomic Compare & Write Unit: 1 00:07:52.627 Fused Compare & Write: Not Supported 00:07:52.627 Scatter-Gather List 00:07:52.627 SGL Command Set: Supported 00:07:52.627 SGL Keyed: Not Supported 00:07:52.627 SGL Bit Bucket Descriptor: Not Supported 00:07:52.627 SGL Metadata Pointer: Not Supported 00:07:52.627 Oversized SGL: Not Supported 00:07:52.627 SGL Metadata Address: Not Supported 00:07:52.627 SGL Offset: Not Supported 00:07:52.627 Transport SGL Data Block: Not Supported 00:07:52.627 Replay Protected Memory Block: Not Supported 00:07:52.627 00:07:52.627 Firmware Slot Information 00:07:52.627 ========================= 00:07:52.627 Active slot: 1 00:07:52.627 Slot 1 Firmware Revision: 1.0 00:07:52.627 00:07:52.627 00:07:52.627 Commands Supported and Effects 00:07:52.627 ============================== 00:07:52.627 Admin Commands 00:07:52.627 -------------- 00:07:52.627 Delete I/O Submission Queue (00h): Supported 00:07:52.627 Create I/O Submission Queue (01h): Supported 00:07:52.627 Get Log Page (02h): Supported 00:07:52.627 Delete I/O Completion Queue (04h): Supported 00:07:52.627 Create I/O Completion Queue (05h): Supported 00:07:52.627 Identify (06h): Supported 00:07:52.627 Abort (08h): Supported 
00:07:52.627 Set Features (09h): Supported 00:07:52.627 Get Features (0Ah): Supported 00:07:52.627 Asynchronous Event Request (0Ch): Supported 00:07:52.627 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:52.627 Directive Send (19h): Supported 00:07:52.627 Directive Receive (1Ah): Supported 00:07:52.627 Virtualization Management (1Ch): Supported 00:07:52.627 Doorbell Buffer Config (7Ch): Supported 00:07:52.627 Format NVM (80h): Supported LBA-Change 00:07:52.627 I/O Commands 00:07:52.627 ------------ 00:07:52.627 Flush (00h): Supported LBA-Change 00:07:52.627 Write (01h): Supported LBA-Change 00:07:52.627 Read (02h): Supported 00:07:52.627 Compare (05h): Supported 00:07:52.627 Write Zeroes (08h): Supported LBA-Change 00:07:52.627 Dataset Management (09h): Supported LBA-Change 00:07:52.627 Unknown (0Ch): Supported 00:07:52.627 Unknown (12h): Supported 00:07:52.627 Copy (19h): Supported LBA-Change 00:07:52.627 Unknown (1Dh): Supported LBA-Change 00:07:52.627 00:07:52.627 Error Log 00:07:52.627 ========= 00:07:52.627 00:07:52.627 Arbitration 00:07:52.627 =========== 00:07:52.627 Arbitration Burst: no limit 00:07:52.627 00:07:52.627 Power Management 00:07:52.627 ================ 00:07:52.627 Number of Power States: 1 00:07:52.627 Current Power State: Power State #0 00:07:52.627 Power State #0: 00:07:52.627 Max Power: 25.00 W 00:07:52.627 Non-Operational State: Operational 00:07:52.627 Entry Latency: 16 microseconds 00:07:52.627 Exit Latency: 4 microseconds 00:07:52.627 Relative Read Throughput: 0 00:07:52.627 Relative Read Latency: 0 00:07:52.628 Relative Write Throughput: 0 00:07:52.628 Relative Write Latency: 0 00:07:52.628 Idle Power: Not Reported 00:07:52.628 Active Power: Not Reported 00:07:52.628 Non-Operational Permissive Mode: Not Supported 00:07:52.628 00:07:52.628 Health Information 00:07:52.628 ================== 00:07:52.628 Critical Warnings: 00:07:52.628 Available Spare Space: OK 00:07:52.628 Temperature: OK 00:07:52.628 Device Reliability: OK 00:07:52.628 Read Only: No 00:07:52.628 Volatile Memory Backup: OK 00:07:52.628 Current Temperature: 323 Kelvin (50 Celsius) 00:07:52.628 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:52.628 Available Spare: 0% 00:07:52.628 Available Spare Threshold: 0% 00:07:52.628 Life Percentage Used: 0% 00:07:52.628 Data Units Read: 670 00:07:52.628 Data Units Written: 599 00:07:52.628 Host Read Commands: 35677 00:07:52.628 Host Write Commands: 35463 00:07:52.628 Controller Busy Time: 0 minutes 00:07:52.628 Power Cycles: 0 00:07:52.628 Power On Hours: 0 hours 00:07:52.628 Unsafe Shutdowns: 0 00:07:52.628 Unrecoverable Media Errors: 0 00:07:52.628 Lifetime Error Log Entries: 0 00:07:52.628 Warning Temperature Time: 0 minutes 00:07:52.628 Critical Temperature Time: 0 minutes 00:07:52.628 00:07:52.628 Number of Queues 00:07:52.628 ================ 00:07:52.628 Number of I/O Submission Queues: 64 00:07:52.628 Number of I/O Completion Queues: 64 00:07:52.628 00:07:52.628 ZNS Specific Controller Data 00:07:52.628 ============================ 00:07:52.628 Zone Append Size Limit: 0 00:07:52.628 00:07:52.628 00:07:52.628 Active Namespaces 00:07:52.628 ================= 00:07:52.628 Namespace ID:1 00:07:52.628 Error Recovery Timeout: Unlimited 00:07:52.628 Command Set Identifier: NVM (00h) 00:07:52.628 Deallocate: Supported 00:07:52.628 Deallocated/Unwritten Error: Supported 00:07:52.628 Deallocated Read Value: All 0x00 00:07:52.628 Deallocate in Write Zeroes: Not Supported 00:07:52.628 Deallocated Guard Field: 0xFFFF 00:07:52.628 Flush: 
Supported 00:07:52.628 Reservation: Not Supported 00:07:52.628 Metadata Transferred as: Separate Metadata Buffer 00:07:52.628 Namespace Sharing Capabilities: Private 00:07:52.628 Size (in LBAs): 1548666 (5GiB) 00:07:52.628 Capacity (in LBAs): 1548666 (5GiB) 00:07:52.628 Utilization (in LBAs): 1548666 (5GiB) 00:07:52.628 Thin Provisioning: Not Supported 00:07:52.628 Per-NS Atomic Units: No 00:07:52.628 Maximum Single Source Range Length: 128 00:07:52.628 Maximum Copy Length: 128 00:07:52.628 Maximum Source Range Count: 128 00:07:52.628 NGUID/EUI64 Never Reused: No 00:07:52.628 Namespace Write Protected: No 00:07:52.628 Number of LBA Formats: 8 00:07:52.628 Current LBA Format: LBA Format #07 00:07:52.628 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:52.628 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:52.628 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:52.628 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:52.628 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:52.628 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:52.628 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:52.628 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:52.628 00:07:52.628 NVM Specific Namespace Data 00:07:52.628 =========================== 00:07:52.628 Logical Block Storage Tag Mask: 0 00:07:52.628 Protection Information Capabilities: 00:07:52.628 16b Guard Protection Information Storage Tag Support: No 00:07:52.628 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:52.628 Storage Tag Check Read Support: No 00:07:52.628 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.628 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.628 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.628 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.628 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.628 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.628 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.628 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.628 10:39:52 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:52.628 10:39:52 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:07:52.628 ===================================================== 00:07:52.628 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:52.628 ===================================================== 00:07:52.628 Controller Capabilities/Features 00:07:52.628 ================================ 00:07:52.628 Vendor ID: 1b36 00:07:52.628 Subsystem Vendor ID: 1af4 00:07:52.628 Serial Number: 12341 00:07:52.628 Model Number: QEMU NVMe Ctrl 00:07:52.628 Firmware Version: 8.0.0 00:07:52.628 Recommended Arb Burst: 6 00:07:52.628 IEEE OUI Identifier: 00 54 52 00:07:52.628 Multi-path I/O 00:07:52.628 May have multiple subsystem ports: No 00:07:52.628 May have multiple controllers: No 00:07:52.628 Associated with SR-IOV VF: No 00:07:52.628 Max Data Transfer Size: 524288 00:07:52.628 Max Number of Namespaces: 256 00:07:52.628 Max Number of I/O Queues: 64 00:07:52.628 NVMe 
Specification Version (VS): 1.4 00:07:52.628 NVMe Specification Version (Identify): 1.4 00:07:52.628 Maximum Queue Entries: 2048 00:07:52.628 Contiguous Queues Required: Yes 00:07:52.628 Arbitration Mechanisms Supported 00:07:52.628 Weighted Round Robin: Not Supported 00:07:52.628 Vendor Specific: Not Supported 00:07:52.628 Reset Timeout: 7500 ms 00:07:52.628 Doorbell Stride: 4 bytes 00:07:52.628 NVM Subsystem Reset: Not Supported 00:07:52.628 Command Sets Supported 00:07:52.628 NVM Command Set: Supported 00:07:52.628 Boot Partition: Not Supported 00:07:52.628 Memory Page Size Minimum: 4096 bytes 00:07:52.628 Memory Page Size Maximum: 65536 bytes 00:07:52.628 Persistent Memory Region: Not Supported 00:07:52.628 Optional Asynchronous Events Supported 00:07:52.628 Namespace Attribute Notices: Supported 00:07:52.628 Firmware Activation Notices: Not Supported 00:07:52.628 ANA Change Notices: Not Supported 00:07:52.628 PLE Aggregate Log Change Notices: Not Supported 00:07:52.628 LBA Status Info Alert Notices: Not Supported 00:07:52.628 EGE Aggregate Log Change Notices: Not Supported 00:07:52.628 Normal NVM Subsystem Shutdown event: Not Supported 00:07:52.628 Zone Descriptor Change Notices: Not Supported 00:07:52.628 Discovery Log Change Notices: Not Supported 00:07:52.628 Controller Attributes 00:07:52.628 128-bit Host Identifier: Not Supported 00:07:52.628 Non-Operational Permissive Mode: Not Supported 00:07:52.628 NVM Sets: Not Supported 00:07:52.628 Read Recovery Levels: Not Supported 00:07:52.628 Endurance Groups: Not Supported 00:07:52.628 Predictable Latency Mode: Not Supported 00:07:52.628 Traffic Based Keep ALive: Not Supported 00:07:52.628 Namespace Granularity: Not Supported 00:07:52.628 SQ Associations: Not Supported 00:07:52.628 UUID List: Not Supported 00:07:52.628 Multi-Domain Subsystem: Not Supported 00:07:52.628 Fixed Capacity Management: Not Supported 00:07:52.628 Variable Capacity Management: Not Supported 00:07:52.628 Delete Endurance Group: Not Supported 00:07:52.628 Delete NVM Set: Not Supported 00:07:52.628 Extended LBA Formats Supported: Supported 00:07:52.628 Flexible Data Placement Supported: Not Supported 00:07:52.628 00:07:52.628 Controller Memory Buffer Support 00:07:52.628 ================================ 00:07:52.628 Supported: No 00:07:52.628 00:07:52.628 Persistent Memory Region Support 00:07:52.628 ================================ 00:07:52.628 Supported: No 00:07:52.628 00:07:52.628 Admin Command Set Attributes 00:07:52.628 ============================ 00:07:52.628 Security Send/Receive: Not Supported 00:07:52.628 Format NVM: Supported 00:07:52.628 Firmware Activate/Download: Not Supported 00:07:52.628 Namespace Management: Supported 00:07:52.628 Device Self-Test: Not Supported 00:07:52.628 Directives: Supported 00:07:52.628 NVMe-MI: Not Supported 00:07:52.628 Virtualization Management: Not Supported 00:07:52.628 Doorbell Buffer Config: Supported 00:07:52.628 Get LBA Status Capability: Not Supported 00:07:52.628 Command & Feature Lockdown Capability: Not Supported 00:07:52.628 Abort Command Limit: 4 00:07:52.628 Async Event Request Limit: 4 00:07:52.628 Number of Firmware Slots: N/A 00:07:52.628 Firmware Slot 1 Read-Only: N/A 00:07:52.628 Firmware Activation Without Reset: N/A 00:07:52.629 Multiple Update Detection Support: N/A 00:07:52.629 Firmware Update Granularity: No Information Provided 00:07:52.629 Per-Namespace SMART Log: Yes 00:07:52.629 Asymmetric Namespace Access Log Page: Not Supported 00:07:52.629 Subsystem NQN: nqn.2019-08.org.qemu:12341 
00:07:52.629 Command Effects Log Page: Supported 00:07:52.629 Get Log Page Extended Data: Supported 00:07:52.629 Telemetry Log Pages: Not Supported 00:07:52.629 Persistent Event Log Pages: Not Supported 00:07:52.629 Supported Log Pages Log Page: May Support 00:07:52.629 Commands Supported & Effects Log Page: Not Supported 00:07:52.629 Feature Identifiers & Effects Log Page:May Support 00:07:52.629 NVMe-MI Commands & Effects Log Page: May Support 00:07:52.629 Data Area 4 for Telemetry Log: Not Supported 00:07:52.629 Error Log Page Entries Supported: 1 00:07:52.629 Keep Alive: Not Supported 00:07:52.629 00:07:52.629 NVM Command Set Attributes 00:07:52.629 ========================== 00:07:52.629 Submission Queue Entry Size 00:07:52.629 Max: 64 00:07:52.629 Min: 64 00:07:52.629 Completion Queue Entry Size 00:07:52.629 Max: 16 00:07:52.629 Min: 16 00:07:52.629 Number of Namespaces: 256 00:07:52.629 Compare Command: Supported 00:07:52.629 Write Uncorrectable Command: Not Supported 00:07:52.629 Dataset Management Command: Supported 00:07:52.629 Write Zeroes Command: Supported 00:07:52.629 Set Features Save Field: Supported 00:07:52.629 Reservations: Not Supported 00:07:52.629 Timestamp: Supported 00:07:52.629 Copy: Supported 00:07:52.629 Volatile Write Cache: Present 00:07:52.629 Atomic Write Unit (Normal): 1 00:07:52.629 Atomic Write Unit (PFail): 1 00:07:52.629 Atomic Compare & Write Unit: 1 00:07:52.629 Fused Compare & Write: Not Supported 00:07:52.629 Scatter-Gather List 00:07:52.629 SGL Command Set: Supported 00:07:52.629 SGL Keyed: Not Supported 00:07:52.629 SGL Bit Bucket Descriptor: Not Supported 00:07:52.629 SGL Metadata Pointer: Not Supported 00:07:52.629 Oversized SGL: Not Supported 00:07:52.629 SGL Metadata Address: Not Supported 00:07:52.629 SGL Offset: Not Supported 00:07:52.629 Transport SGL Data Block: Not Supported 00:07:52.629 Replay Protected Memory Block: Not Supported 00:07:52.629 00:07:52.629 Firmware Slot Information 00:07:52.629 ========================= 00:07:52.629 Active slot: 1 00:07:52.629 Slot 1 Firmware Revision: 1.0 00:07:52.629 00:07:52.629 00:07:52.629 Commands Supported and Effects 00:07:52.629 ============================== 00:07:52.629 Admin Commands 00:07:52.629 -------------- 00:07:52.629 Delete I/O Submission Queue (00h): Supported 00:07:52.629 Create I/O Submission Queue (01h): Supported 00:07:52.629 Get Log Page (02h): Supported 00:07:52.629 Delete I/O Completion Queue (04h): Supported 00:07:52.629 Create I/O Completion Queue (05h): Supported 00:07:52.629 Identify (06h): Supported 00:07:52.629 Abort (08h): Supported 00:07:52.629 Set Features (09h): Supported 00:07:52.629 Get Features (0Ah): Supported 00:07:52.629 Asynchronous Event Request (0Ch): Supported 00:07:52.629 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:52.629 Directive Send (19h): Supported 00:07:52.629 Directive Receive (1Ah): Supported 00:07:52.629 Virtualization Management (1Ch): Supported 00:07:52.629 Doorbell Buffer Config (7Ch): Supported 00:07:52.629 Format NVM (80h): Supported LBA-Change 00:07:52.629 I/O Commands 00:07:52.629 ------------ 00:07:52.629 Flush (00h): Supported LBA-Change 00:07:52.629 Write (01h): Supported LBA-Change 00:07:52.629 Read (02h): Supported 00:07:52.629 Compare (05h): Supported 00:07:52.629 Write Zeroes (08h): Supported LBA-Change 00:07:52.629 Dataset Management (09h): Supported LBA-Change 00:07:52.629 Unknown (0Ch): Supported 00:07:52.629 Unknown (12h): Supported 00:07:52.629 Copy (19h): Supported LBA-Change 00:07:52.629 Unknown (1Dh): 
Supported LBA-Change 00:07:52.629 00:07:52.629 Error Log 00:07:52.629 ========= 00:07:52.629 00:07:52.629 Arbitration 00:07:52.629 =========== 00:07:52.629 Arbitration Burst: no limit 00:07:52.629 00:07:52.629 Power Management 00:07:52.629 ================ 00:07:52.629 Number of Power States: 1 00:07:52.629 Current Power State: Power State #0 00:07:52.629 Power State #0: 00:07:52.629 Max Power: 25.00 W 00:07:52.629 Non-Operational State: Operational 00:07:52.629 Entry Latency: 16 microseconds 00:07:52.629 Exit Latency: 4 microseconds 00:07:52.629 Relative Read Throughput: 0 00:07:52.629 Relative Read Latency: 0 00:07:52.629 Relative Write Throughput: 0 00:07:52.629 Relative Write Latency: 0 00:07:52.629 Idle Power: Not Reported 00:07:52.629 Active Power: Not Reported 00:07:52.629 Non-Operational Permissive Mode: Not Supported 00:07:52.629 00:07:52.629 Health Information 00:07:52.629 ================== 00:07:52.629 Critical Warnings: 00:07:52.629 Available Spare Space: OK 00:07:52.629 Temperature: OK 00:07:52.629 Device Reliability: OK 00:07:52.629 Read Only: No 00:07:52.629 Volatile Memory Backup: OK 00:07:52.629 Current Temperature: 323 Kelvin (50 Celsius) 00:07:52.629 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:52.629 Available Spare: 0% 00:07:52.629 Available Spare Threshold: 0% 00:07:52.629 Life Percentage Used: 0% 00:07:52.629 Data Units Read: 1021 00:07:52.629 Data Units Written: 880 00:07:52.629 Host Read Commands: 52599 00:07:52.629 Host Write Commands: 51287 00:07:52.629 Controller Busy Time: 0 minutes 00:07:52.629 Power Cycles: 0 00:07:52.629 Power On Hours: 0 hours 00:07:52.629 Unsafe Shutdowns: 0 00:07:52.629 Unrecoverable Media Errors: 0 00:07:52.629 Lifetime Error Log Entries: 0 00:07:52.629 Warning Temperature Time: 0 minutes 00:07:52.629 Critical Temperature Time: 0 minutes 00:07:52.629 00:07:52.629 Number of Queues 00:07:52.629 ================ 00:07:52.629 Number of I/O Submission Queues: 64 00:07:52.629 Number of I/O Completion Queues: 64 00:07:52.629 00:07:52.629 ZNS Specific Controller Data 00:07:52.629 ============================ 00:07:52.629 Zone Append Size Limit: 0 00:07:52.629 00:07:52.629 00:07:52.629 Active Namespaces 00:07:52.629 ================= 00:07:52.629 Namespace ID:1 00:07:52.629 Error Recovery Timeout: Unlimited 00:07:52.629 Command Set Identifier: NVM (00h) 00:07:52.629 Deallocate: Supported 00:07:52.629 Deallocated/Unwritten Error: Supported 00:07:52.629 Deallocated Read Value: All 0x00 00:07:52.629 Deallocate in Write Zeroes: Not Supported 00:07:52.629 Deallocated Guard Field: 0xFFFF 00:07:52.629 Flush: Supported 00:07:52.629 Reservation: Not Supported 00:07:52.629 Namespace Sharing Capabilities: Private 00:07:52.629 Size (in LBAs): 1310720 (5GiB) 00:07:52.629 Capacity (in LBAs): 1310720 (5GiB) 00:07:52.629 Utilization (in LBAs): 1310720 (5GiB) 00:07:52.629 Thin Provisioning: Not Supported 00:07:52.629 Per-NS Atomic Units: No 00:07:52.629 Maximum Single Source Range Length: 128 00:07:52.629 Maximum Copy Length: 128 00:07:52.629 Maximum Source Range Count: 128 00:07:52.629 NGUID/EUI64 Never Reused: No 00:07:52.629 Namespace Write Protected: No 00:07:52.629 Number of LBA Formats: 8 00:07:52.629 Current LBA Format: LBA Format #04 00:07:52.629 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:52.629 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:52.629 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:52.629 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:52.629 LBA Format #04: Data Size: 4096 Metadata Size: 0 
00:07:52.629 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:52.629 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:52.629 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:52.629 00:07:52.629 NVM Specific Namespace Data 00:07:52.629 =========================== 00:07:52.629 Logical Block Storage Tag Mask: 0 00:07:52.629 Protection Information Capabilities: 00:07:52.629 16b Guard Protection Information Storage Tag Support: No 00:07:52.629 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:52.629 Storage Tag Check Read Support: No 00:07:52.629 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.629 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.629 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.629 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.629 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.629 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.629 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.629 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.629 10:39:52 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:52.629 10:39:52 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:07:52.891 ===================================================== 00:07:52.891 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:52.891 ===================================================== 00:07:52.891 Controller Capabilities/Features 00:07:52.891 ================================ 00:07:52.891 Vendor ID: 1b36 00:07:52.891 Subsystem Vendor ID: 1af4 00:07:52.891 Serial Number: 12342 00:07:52.891 Model Number: QEMU NVMe Ctrl 00:07:52.891 Firmware Version: 8.0.0 00:07:52.891 Recommended Arb Burst: 6 00:07:52.891 IEEE OUI Identifier: 00 54 52 00:07:52.891 Multi-path I/O 00:07:52.891 May have multiple subsystem ports: No 00:07:52.891 May have multiple controllers: No 00:07:52.891 Associated with SR-IOV VF: No 00:07:52.891 Max Data Transfer Size: 524288 00:07:52.891 Max Number of Namespaces: 256 00:07:52.891 Max Number of I/O Queues: 64 00:07:52.891 NVMe Specification Version (VS): 1.4 00:07:52.891 NVMe Specification Version (Identify): 1.4 00:07:52.891 Maximum Queue Entries: 2048 00:07:52.891 Contiguous Queues Required: Yes 00:07:52.891 Arbitration Mechanisms Supported 00:07:52.891 Weighted Round Robin: Not Supported 00:07:52.891 Vendor Specific: Not Supported 00:07:52.891 Reset Timeout: 7500 ms 00:07:52.891 Doorbell Stride: 4 bytes 00:07:52.891 NVM Subsystem Reset: Not Supported 00:07:52.891 Command Sets Supported 00:07:52.892 NVM Command Set: Supported 00:07:52.892 Boot Partition: Not Supported 00:07:52.892 Memory Page Size Minimum: 4096 bytes 00:07:52.892 Memory Page Size Maximum: 65536 bytes 00:07:52.892 Persistent Memory Region: Not Supported 00:07:52.892 Optional Asynchronous Events Supported 00:07:52.892 Namespace Attribute Notices: Supported 00:07:52.892 Firmware Activation Notices: Not Supported 00:07:52.892 ANA Change Notices: Not Supported 00:07:52.892 PLE Aggregate Log Change Notices: Not Supported 00:07:52.892 LBA Status Info Alert Notices: 
Not Supported 00:07:52.892 EGE Aggregate Log Change Notices: Not Supported 00:07:52.892 Normal NVM Subsystem Shutdown event: Not Supported 00:07:52.892 Zone Descriptor Change Notices: Not Supported 00:07:52.892 Discovery Log Change Notices: Not Supported 00:07:52.892 Controller Attributes 00:07:52.892 128-bit Host Identifier: Not Supported 00:07:52.892 Non-Operational Permissive Mode: Not Supported 00:07:52.892 NVM Sets: Not Supported 00:07:52.892 Read Recovery Levels: Not Supported 00:07:52.892 Endurance Groups: Not Supported 00:07:52.892 Predictable Latency Mode: Not Supported 00:07:52.892 Traffic Based Keep Alive: Not Supported 00:07:52.892 Namespace Granularity: Not Supported 00:07:52.892 SQ Associations: Not Supported 00:07:52.892 UUID List: Not Supported 00:07:52.892 Multi-Domain Subsystem: Not Supported 00:07:52.892 Fixed Capacity Management: Not Supported 00:07:52.892 Variable Capacity Management: Not Supported 00:07:52.892 Delete Endurance Group: Not Supported 00:07:52.892 Delete NVM Set: Not Supported 00:07:52.892 Extended LBA Formats Supported: Supported 00:07:52.892 Flexible Data Placement Supported: Not Supported 00:07:52.892 00:07:52.892 Controller Memory Buffer Support 00:07:52.892 ================================ 00:07:52.892 Supported: No 00:07:52.892 00:07:52.892 Persistent Memory Region Support 00:07:52.892 ================================ 00:07:52.892 Supported: No 00:07:52.892 00:07:52.892 Admin Command Set Attributes 00:07:52.892 ============================ 00:07:52.892 Security Send/Receive: Not Supported 00:07:52.892 Format NVM: Supported 00:07:52.892 Firmware Activate/Download: Not Supported 00:07:52.892 Namespace Management: Supported 00:07:52.892 Device Self-Test: Not Supported 00:07:52.892 Directives: Supported 00:07:52.892 NVMe-MI: Not Supported 00:07:52.892 Virtualization Management: Not Supported 00:07:52.892 Doorbell Buffer Config: Supported 00:07:52.892 Get LBA Status Capability: Not Supported 00:07:52.892 Command & Feature Lockdown Capability: Not Supported 00:07:52.892 Abort Command Limit: 4 00:07:52.892 Async Event Request Limit: 4 00:07:52.892 Number of Firmware Slots: N/A 00:07:52.892 Firmware Slot 1 Read-Only: N/A 00:07:52.892 Firmware Activation Without Reset: N/A 00:07:52.892 Multiple Update Detection Support: N/A 00:07:52.892 Firmware Update Granularity: No Information Provided 00:07:52.892 Per-Namespace SMART Log: Yes 00:07:52.892 Asymmetric Namespace Access Log Page: Not Supported 00:07:52.892 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:07:52.892 Command Effects Log Page: Supported 00:07:52.892 Get Log Page Extended Data: Supported 00:07:52.892 Telemetry Log Pages: Not Supported 00:07:52.892 Persistent Event Log Pages: Not Supported 00:07:52.892 Supported Log Pages Log Page: May Support 00:07:52.892 Commands Supported & Effects Log Page: Not Supported 00:07:52.892 Feature Identifiers & Effects Log Page: May Support 00:07:52.892 NVMe-MI Commands & Effects Log Page: May Support 00:07:52.892 Data Area 4 for Telemetry Log: Not Supported 00:07:52.892 Error Log Page Entries Supported: 1 00:07:52.892 Keep Alive: Not Supported 00:07:52.892 00:07:52.892 NVM Command Set Attributes 00:07:52.892 ========================== 00:07:52.892 Submission Queue Entry Size 00:07:52.892 Max: 64 00:07:52.892 Min: 64 00:07:52.892 Completion Queue Entry Size 00:07:52.892 Max: 16 00:07:52.892 Min: 16 00:07:52.892 Number of Namespaces: 256 00:07:52.892 Compare Command: Supported 00:07:52.892 Write Uncorrectable Command: Not Supported 00:07:52.892 Dataset Management Command:
Supported 00:07:52.892 Write Zeroes Command: Supported 00:07:52.892 Set Features Save Field: Supported 00:07:52.892 Reservations: Not Supported 00:07:52.892 Timestamp: Supported 00:07:52.892 Copy: Supported 00:07:52.892 Volatile Write Cache: Present 00:07:52.892 Atomic Write Unit (Normal): 1 00:07:52.892 Atomic Write Unit (PFail): 1 00:07:52.892 Atomic Compare & Write Unit: 1 00:07:52.892 Fused Compare & Write: Not Supported 00:07:52.892 Scatter-Gather List 00:07:52.892 SGL Command Set: Supported 00:07:52.892 SGL Keyed: Not Supported 00:07:52.892 SGL Bit Bucket Descriptor: Not Supported 00:07:52.892 SGL Metadata Pointer: Not Supported 00:07:52.892 Oversized SGL: Not Supported 00:07:52.892 SGL Metadata Address: Not Supported 00:07:52.892 SGL Offset: Not Supported 00:07:52.892 Transport SGL Data Block: Not Supported 00:07:52.892 Replay Protected Memory Block: Not Supported 00:07:52.892 00:07:52.892 Firmware Slot Information 00:07:52.892 ========================= 00:07:52.892 Active slot: 1 00:07:52.892 Slot 1 Firmware Revision: 1.0 00:07:52.892 00:07:52.892 00:07:52.892 Commands Supported and Effects 00:07:52.892 ============================== 00:07:52.892 Admin Commands 00:07:52.892 -------------- 00:07:52.892 Delete I/O Submission Queue (00h): Supported 00:07:52.892 Create I/O Submission Queue (01h): Supported 00:07:52.892 Get Log Page (02h): Supported 00:07:52.892 Delete I/O Completion Queue (04h): Supported 00:07:52.892 Create I/O Completion Queue (05h): Supported 00:07:52.892 Identify (06h): Supported 00:07:52.892 Abort (08h): Supported 00:07:52.892 Set Features (09h): Supported 00:07:52.892 Get Features (0Ah): Supported 00:07:52.892 Asynchronous Event Request (0Ch): Supported 00:07:52.892 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:52.892 Directive Send (19h): Supported 00:07:52.892 Directive Receive (1Ah): Supported 00:07:52.892 Virtualization Management (1Ch): Supported 00:07:52.892 Doorbell Buffer Config (7Ch): Supported 00:07:52.892 Format NVM (80h): Supported LBA-Change 00:07:52.892 I/O Commands 00:07:52.892 ------------ 00:07:52.892 Flush (00h): Supported LBA-Change 00:07:52.892 Write (01h): Supported LBA-Change 00:07:52.892 Read (02h): Supported 00:07:52.892 Compare (05h): Supported 00:07:52.892 Write Zeroes (08h): Supported LBA-Change 00:07:52.892 Dataset Management (09h): Supported LBA-Change 00:07:52.892 Unknown (0Ch): Supported 00:07:52.892 Unknown (12h): Supported 00:07:52.892 Copy (19h): Supported LBA-Change 00:07:52.892 Unknown (1Dh): Supported LBA-Change 00:07:52.892 00:07:52.892 Error Log 00:07:52.892 ========= 00:07:52.892 00:07:52.892 Arbitration 00:07:52.892 =========== 00:07:52.892 Arbitration Burst: no limit 00:07:52.892 00:07:52.892 Power Management 00:07:52.892 ================ 00:07:52.892 Number of Power States: 1 00:07:52.892 Current Power State: Power State #0 00:07:52.892 Power State #0: 00:07:52.892 Max Power: 25.00 W 00:07:52.892 Non-Operational State: Operational 00:07:52.892 Entry Latency: 16 microseconds 00:07:52.892 Exit Latency: 4 microseconds 00:07:52.892 Relative Read Throughput: 0 00:07:52.892 Relative Read Latency: 0 00:07:52.892 Relative Write Throughput: 0 00:07:52.892 Relative Write Latency: 0 00:07:52.892 Idle Power: Not Reported 00:07:52.892 Active Power: Not Reported 00:07:52.892 Non-Operational Permissive Mode: Not Supported 00:07:52.892 00:07:52.892 Health Information 00:07:52.892 ================== 00:07:52.892 Critical Warnings: 00:07:52.892 Available Spare Space: OK 00:07:52.892 Temperature: OK 00:07:52.892 Device 
Reliability: OK 00:07:52.892 Read Only: No 00:07:52.892 Volatile Memory Backup: OK 00:07:52.892 Current Temperature: 323 Kelvin (50 Celsius) 00:07:52.892 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:52.892 Available Spare: 0% 00:07:52.892 Available Spare Threshold: 0% 00:07:52.892 Life Percentage Used: 0% 00:07:52.892 Data Units Read: 2174 00:07:52.892 Data Units Written: 1961 00:07:52.892 Host Read Commands: 109201 00:07:52.892 Host Write Commands: 107471 00:07:52.892 Controller Busy Time: 0 minutes 00:07:52.892 Power Cycles: 0 00:07:52.892 Power On Hours: 0 hours 00:07:52.892 Unsafe Shutdowns: 0 00:07:52.892 Unrecoverable Media Errors: 0 00:07:52.892 Lifetime Error Log Entries: 0 00:07:52.892 Warning Temperature Time: 0 minutes 00:07:52.893 Critical Temperature Time: 0 minutes 00:07:52.893 00:07:52.893 Number of Queues 00:07:52.893 ================ 00:07:52.893 Number of I/O Submission Queues: 64 00:07:52.893 Number of I/O Completion Queues: 64 00:07:52.893 00:07:52.893 ZNS Specific Controller Data 00:07:52.893 ============================ 00:07:52.893 Zone Append Size Limit: 0 00:07:52.893 00:07:52.893 00:07:52.893 Active Namespaces 00:07:52.893 ================= 00:07:52.893 Namespace ID:1 00:07:52.893 Error Recovery Timeout: Unlimited 00:07:52.893 Command Set Identifier: NVM (00h) 00:07:52.893 Deallocate: Supported 00:07:52.893 Deallocated/Unwritten Error: Supported 00:07:52.893 Deallocated Read Value: All 0x00 00:07:52.893 Deallocate in Write Zeroes: Not Supported 00:07:52.893 Deallocated Guard Field: 0xFFFF 00:07:52.893 Flush: Supported 00:07:52.893 Reservation: Not Supported 00:07:52.893 Namespace Sharing Capabilities: Private 00:07:52.893 Size (in LBAs): 1048576 (4GiB) 00:07:52.893 Capacity (in LBAs): 1048576 (4GiB) 00:07:52.893 Utilization (in LBAs): 1048576 (4GiB) 00:07:52.893 Thin Provisioning: Not Supported 00:07:52.893 Per-NS Atomic Units: No 00:07:52.893 Maximum Single Source Range Length: 128 00:07:52.893 Maximum Copy Length: 128 00:07:52.893 Maximum Source Range Count: 128 00:07:52.893 NGUID/EUI64 Never Reused: No 00:07:52.893 Namespace Write Protected: No 00:07:52.893 Number of LBA Formats: 8 00:07:52.893 Current LBA Format: LBA Format #04 00:07:52.893 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:52.893 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:52.893 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:52.893 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:52.893 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:52.893 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:52.893 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:52.893 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:52.893 00:07:52.893 NVM Specific Namespace Data 00:07:52.893 =========================== 00:07:52.893 Logical Block Storage Tag Mask: 0 00:07:52.893 Protection Information Capabilities: 00:07:52.893 16b Guard Protection Information Storage Tag Support: No 00:07:52.893 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:52.893 Storage Tag Check Read Support: No 00:07:52.893 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.893 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.893 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.893 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.893 Extended LBA Format #04: 
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.893 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.893 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.893 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.893 Namespace ID:2 00:07:52.893 Error Recovery Timeout: Unlimited 00:07:52.893 Command Set Identifier: NVM (00h) 00:07:52.893 Deallocate: Supported 00:07:52.893 Deallocated/Unwritten Error: Supported 00:07:52.893 Deallocated Read Value: All 0x00 00:07:52.893 Deallocate in Write Zeroes: Not Supported 00:07:52.893 Deallocated Guard Field: 0xFFFF 00:07:52.893 Flush: Supported 00:07:52.893 Reservation: Not Supported 00:07:52.893 Namespace Sharing Capabilities: Private 00:07:52.893 Size (in LBAs): 1048576 (4GiB) 00:07:52.893 Capacity (in LBAs): 1048576 (4GiB) 00:07:52.893 Utilization (in LBAs): 1048576 (4GiB) 00:07:52.893 Thin Provisioning: Not Supported 00:07:52.893 Per-NS Atomic Units: No 00:07:52.893 Maximum Single Source Range Length: 128 00:07:52.893 Maximum Copy Length: 128 00:07:52.893 Maximum Source Range Count: 128 00:07:52.893 NGUID/EUI64 Never Reused: No 00:07:52.893 Namespace Write Protected: No 00:07:52.893 Number of LBA Formats: 8 00:07:52.893 Current LBA Format: LBA Format #04 00:07:52.893 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:52.893 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:52.893 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:52.893 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:52.893 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:52.893 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:52.893 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:52.893 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:52.893 00:07:52.893 NVM Specific Namespace Data 00:07:52.893 =========================== 00:07:52.893 Logical Block Storage Tag Mask: 0 00:07:52.893 Protection Information Capabilities: 00:07:52.893 16b Guard Protection Information Storage Tag Support: No 00:07:52.893 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:52.893 Storage Tag Check Read Support: No 00:07:52.893 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.893 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.893 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.893 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.893 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.893 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.893 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.893 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.893 Namespace ID:3 00:07:52.893 Error Recovery Timeout: Unlimited 00:07:52.893 Command Set Identifier: NVM (00h) 00:07:52.893 Deallocate: Supported 00:07:52.893 Deallocated/Unwritten Error: Supported 00:07:52.893 Deallocated Read Value: All 0x00 00:07:52.893 Deallocate in Write Zeroes: Not Supported 00:07:52.893 Deallocated Guard Field: 0xFFFF 00:07:52.893 Flush: Supported 00:07:52.893 Reservation: Not Supported 00:07:52.893 
Namespace Sharing Capabilities: Private 00:07:52.893 Size (in LBAs): 1048576 (4GiB) 00:07:52.893 Capacity (in LBAs): 1048576 (4GiB) 00:07:52.893 Utilization (in LBAs): 1048576 (4GiB) 00:07:52.893 Thin Provisioning: Not Supported 00:07:52.893 Per-NS Atomic Units: No 00:07:52.893 Maximum Single Source Range Length: 128 00:07:52.893 Maximum Copy Length: 128 00:07:52.893 Maximum Source Range Count: 128 00:07:52.893 NGUID/EUI64 Never Reused: No 00:07:52.893 Namespace Write Protected: No 00:07:52.893 Number of LBA Formats: 8 00:07:52.893 Current LBA Format: LBA Format #04 00:07:52.893 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:52.893 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:52.893 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:52.893 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:52.893 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:52.893 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:52.893 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:52.893 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:52.893 00:07:52.893 NVM Specific Namespace Data 00:07:52.893 =========================== 00:07:52.893 Logical Block Storage Tag Mask: 0 00:07:52.893 Protection Information Capabilities: 00:07:52.893 16b Guard Protection Information Storage Tag Support: No 00:07:52.893 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:52.893 Storage Tag Check Read Support: No 00:07:52.893 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.893 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.893 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.893 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.893 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.893 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.893 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.893 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.893 10:39:52 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:52.893 10:39:52 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:07:53.155 ===================================================== 00:07:53.155 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:53.155 ===================================================== 00:07:53.155 Controller Capabilities/Features 00:07:53.155 ================================ 00:07:53.155 Vendor ID: 1b36 00:07:53.155 Subsystem Vendor ID: 1af4 00:07:53.155 Serial Number: 12343 00:07:53.155 Model Number: QEMU NVMe Ctrl 00:07:53.155 Firmware Version: 8.0.0 00:07:53.155 Recommended Arb Burst: 6 00:07:53.155 IEEE OUI Identifier: 00 54 52 00:07:53.155 Multi-path I/O 00:07:53.155 May have multiple subsystem ports: No 00:07:53.155 May have multiple controllers: Yes 00:07:53.155 Associated with SR-IOV VF: No 00:07:53.155 Max Data Transfer Size: 524288 00:07:53.155 Max Number of Namespaces: 256 00:07:53.155 Max Number of I/O Queues: 64 00:07:53.155 NVMe Specification Version (VS): 1.4 00:07:53.155 NVMe Specification Version (Identify): 1.4 00:07:53.155 Maximum Queue Entries: 2048 
00:07:53.155 Contiguous Queues Required: Yes 00:07:53.155 Arbitration Mechanisms Supported 00:07:53.155 Weighted Round Robin: Not Supported 00:07:53.155 Vendor Specific: Not Supported 00:07:53.155 Reset Timeout: 7500 ms 00:07:53.155 Doorbell Stride: 4 bytes 00:07:53.155 NVM Subsystem Reset: Not Supported 00:07:53.155 Command Sets Supported 00:07:53.155 NVM Command Set: Supported 00:07:53.155 Boot Partition: Not Supported 00:07:53.155 Memory Page Size Minimum: 4096 bytes 00:07:53.155 Memory Page Size Maximum: 65536 bytes 00:07:53.155 Persistent Memory Region: Not Supported 00:07:53.155 Optional Asynchronous Events Supported 00:07:53.155 Namespace Attribute Notices: Supported 00:07:53.155 Firmware Activation Notices: Not Supported 00:07:53.155 ANA Change Notices: Not Supported 00:07:53.155 PLE Aggregate Log Change Notices: Not Supported 00:07:53.155 LBA Status Info Alert Notices: Not Supported 00:07:53.155 EGE Aggregate Log Change Notices: Not Supported 00:07:53.155 Normal NVM Subsystem Shutdown event: Not Supported 00:07:53.155 Zone Descriptor Change Notices: Not Supported 00:07:53.155 Discovery Log Change Notices: Not Supported 00:07:53.155 Controller Attributes 00:07:53.155 128-bit Host Identifier: Not Supported 00:07:53.155 Non-Operational Permissive Mode: Not Supported 00:07:53.155 NVM Sets: Not Supported 00:07:53.155 Read Recovery Levels: Not Supported 00:07:53.155 Endurance Groups: Supported 00:07:53.155 Predictable Latency Mode: Not Supported 00:07:53.155 Traffic Based Keep Alive: Not Supported 00:07:53.155 Namespace Granularity: Not Supported 00:07:53.155 SQ Associations: Not Supported 00:07:53.155 UUID List: Not Supported 00:07:53.155 Multi-Domain Subsystem: Not Supported 00:07:53.155 Fixed Capacity Management: Not Supported 00:07:53.155 Variable Capacity Management: Not Supported 00:07:53.155 Delete Endurance Group: Not Supported 00:07:53.155 Delete NVM Set: Not Supported 00:07:53.155 Extended LBA Formats Supported: Supported 00:07:53.155 Flexible Data Placement Supported: Supported 00:07:53.155 00:07:53.155 Controller Memory Buffer Support 00:07:53.155 ================================ 00:07:53.155 Supported: No 00:07:53.155 00:07:53.155 Persistent Memory Region Support 00:07:53.155 ================================ 00:07:53.155 Supported: No 00:07:53.155 00:07:53.155 Admin Command Set Attributes 00:07:53.155 ============================ 00:07:53.155 Security Send/Receive: Not Supported 00:07:53.155 Format NVM: Supported 00:07:53.155 Firmware Activate/Download: Not Supported 00:07:53.155 Namespace Management: Supported 00:07:53.155 Device Self-Test: Not Supported 00:07:53.155 Directives: Supported 00:07:53.155 NVMe-MI: Not Supported 00:07:53.155 Virtualization Management: Not Supported 00:07:53.155 Doorbell Buffer Config: Supported 00:07:53.155 Get LBA Status Capability: Not Supported 00:07:53.155 Command & Feature Lockdown Capability: Not Supported 00:07:53.155 Abort Command Limit: 4 00:07:53.155 Async Event Request Limit: 4 00:07:53.155 Number of Firmware Slots: N/A 00:07:53.155 Firmware Slot 1 Read-Only: N/A 00:07:53.155 Firmware Activation Without Reset: N/A 00:07:53.155 Multiple Update Detection Support: N/A 00:07:53.155 Firmware Update Granularity: No Information Provided 00:07:53.155 Per-Namespace SMART Log: Yes 00:07:53.155 Asymmetric Namespace Access Log Page: Not Supported 00:07:53.155 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:07:53.155 Command Effects Log Page: Supported 00:07:53.155 Get Log Page Extended Data: Supported 00:07:53.155 Telemetry Log Pages: Not
Supported 00:07:53.155 Persistent Event Log Pages: Not Supported 00:07:53.155 Supported Log Pages Log Page: May Support 00:07:53.155 Commands Supported & Effects Log Page: Not Supported 00:07:53.155 Feature Identifiers & Effects Log Page: May Support 00:07:53.155 NVMe-MI Commands & Effects Log Page: May Support 00:07:53.155 Data Area 4 for Telemetry Log: Not Supported 00:07:53.155 Error Log Page Entries Supported: 1 00:07:53.155 Keep Alive: Not Supported 00:07:53.155 00:07:53.155 NVM Command Set Attributes 00:07:53.155 ========================== 00:07:53.155 Submission Queue Entry Size 00:07:53.156 Max: 64 00:07:53.156 Min: 64 00:07:53.156 Completion Queue Entry Size 00:07:53.156 Max: 16 00:07:53.156 Min: 16 00:07:53.156 Number of Namespaces: 256 00:07:53.156 Compare Command: Supported 00:07:53.156 Write Uncorrectable Command: Not Supported 00:07:53.156 Dataset Management Command: Supported 00:07:53.156 Write Zeroes Command: Supported 00:07:53.156 Set Features Save Field: Supported 00:07:53.156 Reservations: Not Supported 00:07:53.156 Timestamp: Supported 00:07:53.156 Copy: Supported 00:07:53.156 Volatile Write Cache: Present 00:07:53.156 Atomic Write Unit (Normal): 1 00:07:53.156 Atomic Write Unit (PFail): 1 00:07:53.156 Atomic Compare & Write Unit: 1 00:07:53.156 Fused Compare & Write: Not Supported 00:07:53.156 Scatter-Gather List 00:07:53.156 SGL Command Set: Supported 00:07:53.156 SGL Keyed: Not Supported 00:07:53.156 SGL Bit Bucket Descriptor: Not Supported 00:07:53.156 SGL Metadata Pointer: Not Supported 00:07:53.156 Oversized SGL: Not Supported 00:07:53.156 SGL Metadata Address: Not Supported 00:07:53.156 SGL Offset: Not Supported 00:07:53.156 Transport SGL Data Block: Not Supported 00:07:53.156 Replay Protected Memory Block: Not Supported 00:07:53.156 00:07:53.156 Firmware Slot Information 00:07:53.156 ========================= 00:07:53.156 Active slot: 1 00:07:53.156 Slot 1 Firmware Revision: 1.0 00:07:53.156 00:07:53.156 00:07:53.156 Commands Supported and Effects 00:07:53.156 ============================== 00:07:53.156 Admin Commands 00:07:53.156 -------------- 00:07:53.156 Delete I/O Submission Queue (00h): Supported 00:07:53.156 Create I/O Submission Queue (01h): Supported 00:07:53.156 Get Log Page (02h): Supported 00:07:53.156 Delete I/O Completion Queue (04h): Supported 00:07:53.156 Create I/O Completion Queue (05h): Supported 00:07:53.156 Identify (06h): Supported 00:07:53.156 Abort (08h): Supported 00:07:53.156 Set Features (09h): Supported 00:07:53.156 Get Features (0Ah): Supported 00:07:53.156 Asynchronous Event Request (0Ch): Supported 00:07:53.156 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:53.156 Directive Send (19h): Supported 00:07:53.156 Directive Receive (1Ah): Supported 00:07:53.156 Virtualization Management (1Ch): Supported 00:07:53.156 Doorbell Buffer Config (7Ch): Supported 00:07:53.156 Format NVM (80h): Supported LBA-Change 00:07:53.156 I/O Commands 00:07:53.156 ------------ 00:07:53.156 Flush (00h): Supported LBA-Change 00:07:53.156 Write (01h): Supported LBA-Change 00:07:53.156 Read (02h): Supported 00:07:53.156 Compare (05h): Supported 00:07:53.156 Write Zeroes (08h): Supported LBA-Change 00:07:53.156 Dataset Management (09h): Supported LBA-Change 00:07:53.156 Unknown (0Ch): Supported 00:07:53.156 Unknown (12h): Supported 00:07:53.156 Copy (19h): Supported LBA-Change 00:07:53.156 Unknown (1Dh): Supported LBA-Change 00:07:53.156 00:07:53.156 Error Log 00:07:53.156 ========= 00:07:53.156 00:07:53.156 Arbitration 00:07:53.156 =========== 
00:07:53.156 Arbitration Burst: no limit 00:07:53.156 00:07:53.156 Power Management 00:07:53.156 ================ 00:07:53.156 Number of Power States: 1 00:07:53.156 Current Power State: Power State #0 00:07:53.156 Power State #0: 00:07:53.156 Max Power: 25.00 W 00:07:53.156 Non-Operational State: Operational 00:07:53.156 Entry Latency: 16 microseconds 00:07:53.156 Exit Latency: 4 microseconds 00:07:53.156 Relative Read Throughput: 0 00:07:53.156 Relative Read Latency: 0 00:07:53.156 Relative Write Throughput: 0 00:07:53.156 Relative Write Latency: 0 00:07:53.156 Idle Power: Not Reported 00:07:53.156 Active Power: Not Reported 00:07:53.156 Non-Operational Permissive Mode: Not Supported 00:07:53.156 00:07:53.156 Health Information 00:07:53.156 ================== 00:07:53.156 Critical Warnings: 00:07:53.156 Available Spare Space: OK 00:07:53.156 Temperature: OK 00:07:53.156 Device Reliability: OK 00:07:53.156 Read Only: No 00:07:53.156 Volatile Memory Backup: OK 00:07:53.156 Current Temperature: 323 Kelvin (50 Celsius) 00:07:53.156 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:53.156 Available Spare: 0% 00:07:53.156 Available Spare Threshold: 0% 00:07:53.156 Life Percentage Used: 0% 00:07:53.156 Data Units Read: 832 00:07:53.156 Data Units Written: 761 00:07:53.156 Host Read Commands: 37237 00:07:53.156 Host Write Commands: 36660 00:07:53.156 Controller Busy Time: 0 minutes 00:07:53.156 Power Cycles: 0 00:07:53.156 Power On Hours: 0 hours 00:07:53.156 Unsafe Shutdowns: 0 00:07:53.156 Unrecoverable Media Errors: 0 00:07:53.156 Lifetime Error Log Entries: 0 00:07:53.156 Warning Temperature Time: 0 minutes 00:07:53.156 Critical Temperature Time: 0 minutes 00:07:53.156 00:07:53.156 Number of Queues 00:07:53.156 ================ 00:07:53.156 Number of I/O Submission Queues: 64 00:07:53.156 Number of I/O Completion Queues: 64 00:07:53.156 00:07:53.156 ZNS Specific Controller Data 00:07:53.156 ============================ 00:07:53.156 Zone Append Size Limit: 0 00:07:53.156 00:07:53.156 00:07:53.156 Active Namespaces 00:07:53.156 ================= 00:07:53.156 Namespace ID:1 00:07:53.156 Error Recovery Timeout: Unlimited 00:07:53.156 Command Set Identifier: NVM (00h) 00:07:53.156 Deallocate: Supported 00:07:53.156 Deallocated/Unwritten Error: Supported 00:07:53.156 Deallocated Read Value: All 0x00 00:07:53.156 Deallocate in Write Zeroes: Not Supported 00:07:53.156 Deallocated Guard Field: 0xFFFF 00:07:53.156 Flush: Supported 00:07:53.156 Reservation: Not Supported 00:07:53.156 Namespace Sharing Capabilities: Multiple Controllers 00:07:53.156 Size (in LBAs): 262144 (1GiB) 00:07:53.156 Capacity (in LBAs): 262144 (1GiB) 00:07:53.156 Utilization (in LBAs): 262144 (1GiB) 00:07:53.156 Thin Provisioning: Not Supported 00:07:53.156 Per-NS Atomic Units: No 00:07:53.156 Maximum Single Source Range Length: 128 00:07:53.156 Maximum Copy Length: 128 00:07:53.156 Maximum Source Range Count: 128 00:07:53.156 NGUID/EUI64 Never Reused: No 00:07:53.156 Namespace Write Protected: No 00:07:53.156 Endurance group ID: 1 00:07:53.156 Number of LBA Formats: 8 00:07:53.156 Current LBA Format: LBA Format #04 00:07:53.156 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:53.156 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:53.156 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:53.156 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:53.156 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:53.156 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:53.156 LBA Format #06: Data Size: 4096 
Metadata Size: 16 00:07:53.156 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:53.156 00:07:53.156 Get Feature FDP: 00:07:53.156 ================ 00:07:53.156 Enabled: Yes 00:07:53.156 FDP configuration index: 0 00:07:53.156 00:07:53.156 FDP configurations log page 00:07:53.156 =========================== 00:07:53.156 Number of FDP configurations: 1 00:07:53.156 Version: 0 00:07:53.156 Size: 112 00:07:53.156 FDP Configuration Descriptor: 0 00:07:53.156 Descriptor Size: 96 00:07:53.156 Reclaim Group Identifier format: 2 00:07:53.156 FDP Volatile Write Cache: Not Present 00:07:53.156 FDP Configuration: Valid 00:07:53.156 Vendor Specific Size: 0 00:07:53.156 Number of Reclaim Groups: 2 00:07:53.156 Number of Reclaim Unit Handles: 8 00:07:53.156 Max Placement Identifiers: 128 00:07:53.156 Number of Namespaces Supported: 256 00:07:53.156 Reclaim Unit Nominal Size: 6000000 bytes 00:07:53.156 Estimated Reclaim Unit Time Limit: Not Reported 00:07:53.156 RUH Desc #000: RUH Type: Initially Isolated 00:07:53.156 RUH Desc #001: RUH Type: Initially Isolated 00:07:53.156 RUH Desc #002: RUH Type: Initially Isolated 00:07:53.156 RUH Desc #003: RUH Type: Initially Isolated 00:07:53.156 RUH Desc #004: RUH Type: Initially Isolated 00:07:53.156 RUH Desc #005: RUH Type: Initially Isolated 00:07:53.156 RUH Desc #006: RUH Type: Initially Isolated 00:07:53.156 RUH Desc #007: RUH Type: Initially Isolated 00:07:53.156 00:07:53.156 FDP reclaim unit handle usage log page 00:07:53.156 ====================================== 00:07:53.156 Number of Reclaim Unit Handles: 8 00:07:53.156 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:07:53.156 RUH Usage Desc #001: RUH Attributes: Unused 00:07:53.156 RUH Usage Desc #002: RUH Attributes: Unused 00:07:53.156 RUH Usage Desc #003: RUH Attributes: Unused 00:07:53.156 RUH Usage Desc #004: RUH Attributes: Unused 00:07:53.156 RUH Usage Desc #005: RUH Attributes: Unused 00:07:53.156 RUH Usage Desc #006: RUH Attributes: Unused 00:07:53.156 RUH Usage Desc #007: RUH Attributes: Unused 00:07:53.156 00:07:53.156 FDP statistics log page 00:07:53.156 ======================= 00:07:53.156 Host bytes with metadata written: 474062848 00:07:53.156 Media bytes with metadata written: 474116096 00:07:53.156 Media bytes erased: 0 00:07:53.156 00:07:53.156 FDP events log page 00:07:53.156 =================== 00:07:53.156 Number of FDP events: 0 00:07:53.156 00:07:53.156 NVM Specific Namespace Data 00:07:53.156 =========================== 00:07:53.156 Logical Block Storage Tag Mask: 0 00:07:53.156 Protection Information Capabilities: 00:07:53.157 16b Guard Protection Information Storage Tag Support: No 00:07:53.157 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:53.157 Storage Tag Check Read Support: No 00:07:53.157 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:53.157 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:53.157 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:53.157 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:53.157 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:53.157 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:53.157 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:53.157 Extended LBA
Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:53.157 00:07:53.157 real 0m0.990s 00:07:53.157 user 0m0.361s 00:07:53.157 sys 0m0.416s 00:07:53.157 10:39:52 nvme.nvme_identify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:53.157 10:39:52 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:07:53.157 ************************************ 00:07:53.157 END TEST nvme_identify 00:07:53.157 ************************************ 00:07:53.157 10:39:53 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:07:53.157 10:39:53 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:53.157 10:39:53 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:53.157 10:39:53 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:53.157 ************************************ 00:07:53.157 START TEST nvme_perf 00:07:53.157 ************************************ 00:07:53.157 10:39:53 nvme.nvme_perf -- common/autotest_common.sh@1125 -- # nvme_perf 00:07:53.157 10:39:53 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:07:54.546 Initializing NVMe Controllers 00:07:54.547 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:54.547 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:54.547 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:54.547 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:54.547 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:07:54.547 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:07:54.547 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:07:54.547 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:07:54.547 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:07:54.547 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:07:54.547 Initialization complete. Launching workers. 
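
The identify dumps above come straight from the spdk_nvme_identify tool; the same controller and namespace fields are reachable through SPDK's public C API. Below is a minimal sketch, not part of this test run (the app name is hypothetical), that probes the local PCIe controllers and reprints a few of the fields seen above, including the capacity arithmetic the dumps imply: Size (in LBAs) x LBA data size, e.g. 1048576 x 4096 bytes = 4 GiB for NSID 1 on 0000:00:12.0.

#include <inttypes.h>
#include <stdio.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

/* Attach to every controller the probe finds. */
static bool
probe_cb(void *cb_ctx, const struct spdk_nvme_transport_id *trid,
	 struct spdk_nvme_ctrlr_opts *opts)
{
	return true;
}

/* Print a few of the identify fields dumped in the log above. */
static void
attach_cb(void *cb_ctx, const struct spdk_nvme_transport_id *trid,
	  struct spdk_nvme_ctrlr *ctrlr, const struct spdk_nvme_ctrlr_opts *opts)
{
	const struct spdk_nvme_ctrlr_data *cdata = spdk_nvme_ctrlr_get_data(ctrlr);
	uint32_t nsid;

	/* SN/MN are fixed-width, space-padded byte arrays, hence %.*s. */
	printf("%s: SN '%.*s' MN '%.*s'\n", trid->traddr,
	       (int)sizeof(cdata->sn), cdata->sn,
	       (int)sizeof(cdata->mn), cdata->mn);

	for (nsid = spdk_nvme_ctrlr_get_first_active_ns(ctrlr); nsid != 0;
	     nsid = spdk_nvme_ctrlr_get_next_active_ns(ctrlr, nsid)) {
		struct spdk_nvme_ns *ns = spdk_nvme_ctrlr_get_ns(ctrlr, nsid);

		/* Size (in LBAs) x LBA data size = capacity in bytes,
		 * matching the per-namespace listings above. */
		printf("  nsid %u: %" PRIu64 " LBAs x %u B = %" PRIu64 " bytes\n",
		       nsid, spdk_nvme_ns_get_num_sectors(ns),
		       spdk_nvme_ns_get_sector_size(ns),
		       spdk_nvme_ns_get_size(ns));
	}
	spdk_nvme_detach(ctrlr);
}

int
main(void)
{
	struct spdk_env_opts opts;

	spdk_env_opts_init(&opts);
	opts.name = "identify_sketch"; /* hypothetical app name */
	if (spdk_env_init(&opts) < 0) {
		fprintf(stderr, "spdk_env_init failed\n");
		return 1;
	}
	/* With a NULL transport ID, probe enumerates local PCIe controllers. */
	return spdk_nvme_probe(NULL, NULL, probe_cb, attach_cb, NULL) != 0;
}

Built against an SPDK tree such as the /home/vagrant/spdk_repo/spdk checkout used here, this prints one line per controller and one per active namespace.
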
00:07:54.547 ======================================================== 00:07:54.547 Latency(us) 00:07:54.547 Device Information : IOPS MiB/s Average min max 00:07:54.547 PCIE (0000:00:13.0) NSID 1 from core 0: 7534.49 88.29 17002.81 9900.72 37149.14 00:07:54.547 PCIE (0000:00:10.0) NSID 1 from core 0: 7534.49 88.29 16993.92 8903.32 36926.80 00:07:54.547 PCIE (0000:00:11.0) NSID 1 from core 0: 7534.49 88.29 16979.66 8002.30 36540.03 00:07:54.547 PCIE (0000:00:12.0) NSID 1 from core 0: 7534.49 88.29 16963.97 6494.40 37374.46 00:07:54.547 PCIE (0000:00:12.0) NSID 2 from core 0: 7534.49 88.29 16947.78 5764.96 37353.71 00:07:54.547 PCIE (0000:00:12.0) NSID 3 from core 0: 7598.34 89.04 16788.57 4855.80 29894.91 00:07:54.547 ======================================================== 00:07:54.547 Total : 45270.79 530.52 16945.90 4855.80 37374.46 00:07:54.547 00:07:54.547 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:07:54.547 ================================================================================= 00:07:54.547 1.00000% : 12401.428us 00:07:54.547 10.00000% : 14619.569us 00:07:54.547 25.00000% : 15426.166us 00:07:54.547 50.00000% : 16636.062us 00:07:54.547 75.00000% : 18148.431us 00:07:54.547 90.00000% : 19559.975us 00:07:54.547 95.00000% : 20366.572us 00:07:54.547 98.00000% : 21173.169us 00:07:54.547 99.00000% : 29844.086us 00:07:54.547 99.50000% : 36498.511us 00:07:54.547 99.90000% : 37103.458us 00:07:54.547 99.99000% : 37305.108us 00:07:54.547 99.99900% : 37305.108us 00:07:54.547 99.99990% : 37305.108us 00:07:54.547 99.99999% : 37305.108us 00:07:54.547 00:07:54.547 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:07:54.547 ================================================================================= 00:07:54.547 1.00000% : 12653.489us 00:07:54.547 10.00000% : 14619.569us 00:07:54.547 25.00000% : 15526.991us 00:07:54.547 50.00000% : 16535.237us 00:07:54.547 75.00000% : 18148.431us 00:07:54.547 90.00000% : 19761.625us 00:07:54.547 95.00000% : 20467.397us 00:07:54.547 98.00000% : 21072.345us 00:07:54.547 99.00000% : 30045.735us 00:07:54.547 99.50000% : 36296.862us 00:07:54.547 99.90000% : 36901.809us 00:07:54.547 99.99000% : 37103.458us 00:07:54.547 99.99900% : 37103.458us 00:07:54.547 99.99990% : 37103.458us 00:07:54.547 99.99999% : 37103.458us 00:07:54.547 00:07:54.547 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:07:54.547 ================================================================================= 00:07:54.547 1.00000% : 13308.849us 00:07:54.547 10.00000% : 14720.394us 00:07:54.547 25.00000% : 15526.991us 00:07:54.547 50.00000% : 16535.237us 00:07:54.547 75.00000% : 18047.606us 00:07:54.547 90.00000% : 19862.449us 00:07:54.547 95.00000% : 20467.397us 00:07:54.547 98.00000% : 21273.994us 00:07:54.547 99.00000% : 29440.788us 00:07:54.547 99.50000% : 36095.212us 00:07:54.547 99.90000% : 36498.511us 00:07:54.547 99.99000% : 36700.160us 00:07:54.547 99.99900% : 36700.160us 00:07:54.547 99.99990% : 36700.160us 00:07:54.547 99.99999% : 36700.160us 00:07:54.547 00:07:54.547 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:07:54.547 ================================================================================= 00:07:54.547 1.00000% : 12098.954us 00:07:54.547 10.00000% : 14619.569us 00:07:54.547 25.00000% : 15426.166us 00:07:54.547 50.00000% : 16535.237us 00:07:54.547 75.00000% : 18249.255us 00:07:54.547 90.00000% : 19761.625us 00:07:54.547 95.00000% : 20568.222us 00:07:54.547 98.00000% : 21677.292us 
00:07:54.547 99.00000% : 30247.385us 00:07:54.547 99.50000% : 36901.809us 00:07:54.547 99.90000% : 37305.108us 00:07:54.547 99.99000% : 37506.757us 00:07:54.547 99.99900% : 37506.757us 00:07:54.547 99.99990% : 37506.757us 00:07:54.547 99.99999% : 37506.757us 00:07:54.547 00:07:54.547 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:07:54.547 ================================================================================= 00:07:54.547 1.00000% : 11241.945us 00:07:54.547 10.00000% : 14518.745us 00:07:54.547 25.00000% : 15426.166us 00:07:54.547 50.00000% : 16636.062us 00:07:54.547 75.00000% : 18249.255us 00:07:54.547 90.00000% : 19559.975us 00:07:54.547 95.00000% : 20568.222us 00:07:54.547 98.00000% : 21677.292us 00:07:54.547 99.00000% : 29642.437us 00:07:54.547 99.50000% : 36901.809us 00:07:54.547 99.90000% : 37305.108us 00:07:54.547 99.99000% : 37506.757us 00:07:54.547 99.99900% : 37506.757us 00:07:54.547 99.99990% : 37506.757us 00:07:54.547 99.99999% : 37506.757us 00:07:54.547 00:07:54.547 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:07:54.547 ================================================================================= 00:07:54.547 1.00000% : 10939.471us 00:07:54.547 10.00000% : 14518.745us 00:07:54.547 25.00000% : 15325.342us 00:07:54.547 50.00000% : 16535.237us 00:07:54.547 75.00000% : 18350.080us 00:07:54.547 90.00000% : 19660.800us 00:07:54.547 95.00000% : 20265.748us 00:07:54.547 98.00000% : 21072.345us 00:07:54.547 99.00000% : 23189.662us 00:07:54.547 99.50000% : 29440.788us 00:07:54.547 99.90000% : 29844.086us 00:07:54.547 99.99000% : 30045.735us 00:07:54.547 99.99900% : 30045.735us 00:07:54.547 99.99990% : 30045.735us 00:07:54.547 99.99999% : 30045.735us 00:07:54.547 00:07:54.547 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:07:54.547 ============================================================================== 00:07:54.547 Range in us Cumulative IO count 00:07:54.547 9880.812 - 9931.225: 0.0397% ( 3) 00:07:54.547 9931.225 - 9981.637: 0.0927% ( 4) 00:07:54.547 9981.637 - 10032.049: 0.1589% ( 5) 00:07:54.547 10032.049 - 10082.462: 0.2251% ( 5) 00:07:54.547 10082.462 - 10132.874: 0.2781% ( 4) 00:07:54.547 10132.874 - 10183.286: 0.3310% ( 4) 00:07:54.547 10183.286 - 10233.698: 0.3972% ( 5) 00:07:54.547 10233.698 - 10284.111: 0.4502% ( 4) 00:07:54.547 10284.111 - 10334.523: 0.5032% ( 4) 00:07:54.547 10334.523 - 10384.935: 0.5429% ( 3) 00:07:54.547 10384.935 - 10435.348: 0.5826% ( 3) 00:07:54.547 10435.348 - 10485.760: 0.6356% ( 4) 00:07:54.547 10485.760 - 10536.172: 0.6886% ( 4) 00:07:54.547 10536.172 - 10586.585: 0.7415% ( 4) 00:07:54.547 10586.585 - 10636.997: 0.8077% ( 5) 00:07:54.547 10636.997 - 10687.409: 0.8475% ( 3) 00:07:54.547 12098.954 - 12149.366: 0.8607% ( 1) 00:07:54.547 12149.366 - 12199.778: 0.8872% ( 2) 00:07:54.547 12199.778 - 12250.191: 0.9137% ( 2) 00:07:54.547 12250.191 - 12300.603: 0.9534% ( 3) 00:07:54.547 12300.603 - 12351.015: 0.9931% ( 3) 00:07:54.547 12351.015 - 12401.428: 1.0593% ( 5) 00:07:54.547 12401.428 - 12451.840: 1.1255% ( 5) 00:07:54.547 12451.840 - 12502.252: 1.1785% ( 4) 00:07:54.547 12502.252 - 12552.665: 1.2182% ( 3) 00:07:54.547 12552.665 - 12603.077: 1.2447% ( 2) 00:07:54.547 12603.077 - 12653.489: 1.2844% ( 3) 00:07:54.547 12703.902 - 12754.314: 1.3374% ( 4) 00:07:54.547 12754.314 - 12804.726: 1.3639% ( 2) 00:07:54.547 12804.726 - 12855.138: 1.3904% ( 2) 00:07:54.547 12855.138 - 12905.551: 1.4168% ( 2) 00:07:54.547 12905.551 - 13006.375: 1.4831% ( 5) 00:07:54.547 13006.375 - 
13107.200: 1.5625% ( 6) 00:07:54.547 13107.200 - 13208.025: 1.6419% ( 6) 00:07:54.547 13208.025 - 13308.849: 1.7611% ( 9) 00:07:54.547 13308.849 - 13409.674: 1.9333% ( 13) 00:07:54.547 13409.674 - 13510.498: 2.1849% ( 19) 00:07:54.547 13510.498 - 13611.323: 2.5291% ( 26) 00:07:54.547 13611.323 - 13712.148: 2.8602% ( 25) 00:07:54.547 13712.148 - 13812.972: 3.2574% ( 30) 00:07:54.547 13812.972 - 13913.797: 3.7474% ( 37) 00:07:54.547 13913.797 - 14014.622: 4.3697% ( 47) 00:07:54.547 14014.622 - 14115.446: 5.0980% ( 55) 00:07:54.547 14115.446 - 14216.271: 6.0117% ( 69) 00:07:54.547 14216.271 - 14317.095: 7.2431% ( 93) 00:07:54.547 14317.095 - 14417.920: 8.4613% ( 92) 00:07:54.547 14417.920 - 14518.745: 9.8252% ( 103) 00:07:54.547 14518.745 - 14619.569: 11.1758% ( 102) 00:07:54.547 14619.569 - 14720.394: 12.6721% ( 113) 00:07:54.547 14720.394 - 14821.218: 14.1287% ( 110) 00:07:54.547 14821.218 - 14922.043: 15.5588% ( 108) 00:07:54.547 14922.043 - 15022.868: 17.2537% ( 128) 00:07:54.547 15022.868 - 15123.692: 19.0016% ( 132) 00:07:54.547 15123.692 - 15224.517: 20.9349% ( 146) 00:07:54.547 15224.517 - 15325.342: 23.1727% ( 169) 00:07:54.547 15325.342 - 15426.166: 25.4502% ( 172) 00:07:54.547 15426.166 - 15526.991: 27.4762% ( 153) 00:07:54.547 15526.991 - 15627.815: 29.6743% ( 166) 00:07:54.547 15627.815 - 15728.640: 31.8061% ( 161) 00:07:54.547 15728.640 - 15829.465: 33.7526% ( 147) 00:07:54.547 15829.465 - 15930.289: 36.1229% ( 179) 00:07:54.547 15930.289 - 16031.114: 38.6520% ( 191) 00:07:54.547 16031.114 - 16131.938: 41.0090% ( 178) 00:07:54.547 16131.938 - 16232.763: 43.3660% ( 178) 00:07:54.547 16232.763 - 16333.588: 45.7760% ( 182) 00:07:54.547 16333.588 - 16434.412: 47.9740% ( 166) 00:07:54.547 16434.412 - 16535.237: 49.8543% ( 142) 00:07:54.547 16535.237 - 16636.062: 51.7082% ( 140) 00:07:54.547 16636.062 - 16736.886: 53.3766% ( 126) 00:07:54.547 16736.886 - 16837.711: 55.0715% ( 128) 00:07:54.548 16837.711 - 16938.535: 56.7664% ( 128) 00:07:54.548 16938.535 - 17039.360: 58.4349% ( 126) 00:07:54.548 17039.360 - 17140.185: 59.9311% ( 113) 00:07:54.548 17140.185 - 17241.009: 61.2421% ( 99) 00:07:54.548 17241.009 - 17341.834: 62.5927% ( 102) 00:07:54.548 17341.834 - 17442.658: 63.9036% ( 99) 00:07:54.548 17442.658 - 17543.483: 65.2675% ( 103) 00:07:54.548 17543.483 - 17644.308: 66.8300% ( 118) 00:07:54.548 17644.308 - 17745.132: 68.3925% ( 118) 00:07:54.548 17745.132 - 17845.957: 70.0609% ( 126) 00:07:54.548 17845.957 - 17946.782: 71.7293% ( 126) 00:07:54.548 17946.782 - 18047.606: 73.3978% ( 126) 00:07:54.548 18047.606 - 18148.431: 75.0530% ( 125) 00:07:54.548 18148.431 - 18249.255: 76.7744% ( 130) 00:07:54.548 18249.255 - 18350.080: 78.1780% ( 106) 00:07:54.548 18350.080 - 18450.905: 79.4624% ( 97) 00:07:54.548 18450.905 - 18551.729: 80.6144% ( 87) 00:07:54.548 18551.729 - 18652.554: 81.8459% ( 93) 00:07:54.548 18652.554 - 18753.378: 82.9184% ( 81) 00:07:54.548 18753.378 - 18854.203: 83.8851% ( 73) 00:07:54.548 18854.203 - 18955.028: 84.6796% ( 60) 00:07:54.548 18955.028 - 19055.852: 85.4740% ( 60) 00:07:54.548 19055.852 - 19156.677: 86.3877% ( 69) 00:07:54.548 19156.677 - 19257.502: 87.4470% ( 80) 00:07:54.548 19257.502 - 19358.326: 88.4004% ( 72) 00:07:54.548 19358.326 - 19459.151: 89.2876% ( 67) 00:07:54.548 19459.151 - 19559.975: 90.0424% ( 57) 00:07:54.548 19559.975 - 19660.800: 90.8104% ( 58) 00:07:54.548 19660.800 - 19761.625: 91.4592% ( 49) 00:07:54.548 19761.625 - 19862.449: 92.1213% ( 50) 00:07:54.548 19862.449 - 19963.274: 92.7834% ( 50) 00:07:54.548 19963.274 - 20064.098: 
93.5779% ( 60) 00:07:54.548 20064.098 - 20164.923: 94.2532% ( 51) 00:07:54.548 20164.923 - 20265.748: 94.7564% ( 38) 00:07:54.548 20265.748 - 20366.572: 95.2198% ( 35) 00:07:54.548 20366.572 - 20467.397: 95.6965% ( 36) 00:07:54.548 20467.397 - 20568.222: 96.1997% ( 38) 00:07:54.548 20568.222 - 20669.046: 96.6896% ( 37) 00:07:54.548 20669.046 - 20769.871: 97.1133% ( 32) 00:07:54.548 20769.871 - 20870.695: 97.4709% ( 27) 00:07:54.548 20870.695 - 20971.520: 97.6960% ( 17) 00:07:54.548 20971.520 - 21072.345: 97.9211% ( 17) 00:07:54.548 21072.345 - 21173.169: 98.0403% ( 9) 00:07:54.548 21173.169 - 21273.994: 98.1065% ( 5) 00:07:54.548 21273.994 - 21374.818: 98.1594% ( 4) 00:07:54.548 21374.818 - 21475.643: 98.2256% ( 5) 00:07:54.548 21475.643 - 21576.468: 98.2918% ( 5) 00:07:54.548 21576.468 - 21677.292: 98.3051% ( 1) 00:07:54.548 28835.840 - 29037.489: 98.3581% ( 4) 00:07:54.548 29037.489 - 29239.138: 98.5302% ( 13) 00:07:54.548 29239.138 - 29440.788: 98.7023% ( 13) 00:07:54.548 29440.788 - 29642.437: 98.8745% ( 13) 00:07:54.548 29642.437 - 29844.086: 99.0466% ( 13) 00:07:54.548 29844.086 - 30045.735: 99.1525% ( 8) 00:07:54.548 35893.563 - 36095.212: 99.1923% ( 3) 00:07:54.548 36095.212 - 36296.862: 99.3644% ( 13) 00:07:54.548 36296.862 - 36498.511: 99.5233% ( 12) 00:07:54.548 36498.511 - 36700.160: 99.6690% ( 11) 00:07:54.548 36700.160 - 36901.809: 99.8146% ( 11) 00:07:54.548 36901.809 - 37103.458: 99.9735% ( 12) 00:07:54.548 37103.458 - 37305.108: 100.0000% ( 2) 00:07:54.548 00:07:54.548 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:07:54.548 ============================================================================== 00:07:54.548 Range in us Cumulative IO count 00:07:54.548 8872.566 - 8922.978: 0.0265% ( 2) 00:07:54.548 8922.978 - 8973.391: 0.0662% ( 3) 00:07:54.548 8973.391 - 9023.803: 0.1059% ( 3) 00:07:54.548 9023.803 - 9074.215: 0.1589% ( 4) 00:07:54.548 9074.215 - 9124.628: 0.1721% ( 1) 00:07:54.548 9124.628 - 9175.040: 0.2648% ( 7) 00:07:54.548 9175.040 - 9225.452: 0.2913% ( 2) 00:07:54.548 9225.452 - 9275.865: 0.3443% ( 4) 00:07:54.548 9275.865 - 9326.277: 0.3972% ( 4) 00:07:54.548 9326.277 - 9376.689: 0.4105% ( 1) 00:07:54.548 9376.689 - 9427.102: 0.5297% ( 9) 00:07:54.548 9427.102 - 9477.514: 0.5429% ( 1) 00:07:54.548 9477.514 - 9527.926: 0.5694% ( 2) 00:07:54.548 9527.926 - 9578.338: 0.6488% ( 6) 00:07:54.548 9578.338 - 9628.751: 0.6621% ( 1) 00:07:54.548 9628.751 - 9679.163: 0.7150% ( 4) 00:07:54.548 9679.163 - 9729.575: 0.8342% ( 9) 00:07:54.548 9729.575 - 9779.988: 0.8475% ( 1) 00:07:54.548 12351.015 - 12401.428: 0.8607% ( 1) 00:07:54.548 12401.428 - 12451.840: 0.8872% ( 2) 00:07:54.548 12451.840 - 12502.252: 0.9004% ( 1) 00:07:54.548 12502.252 - 12552.665: 0.9401% ( 3) 00:07:54.548 12552.665 - 12603.077: 0.9799% ( 3) 00:07:54.548 12603.077 - 12653.489: 1.0328% ( 4) 00:07:54.548 12703.902 - 12754.314: 1.0593% ( 2) 00:07:54.548 12754.314 - 12804.726: 1.0990% ( 3) 00:07:54.548 12804.726 - 12855.138: 1.1255% ( 2) 00:07:54.548 12855.138 - 12905.551: 1.1653% ( 3) 00:07:54.548 12905.551 - 13006.375: 1.2315% ( 5) 00:07:54.548 13006.375 - 13107.200: 1.3242% ( 7) 00:07:54.548 13107.200 - 13208.025: 1.4831% ( 12) 00:07:54.548 13208.025 - 13308.849: 1.6022% ( 9) 00:07:54.548 13308.849 - 13409.674: 1.9068% ( 23) 00:07:54.548 13409.674 - 13510.498: 2.0657% ( 12) 00:07:54.548 13510.498 - 13611.323: 2.5689% ( 38) 00:07:54.548 13611.323 - 13712.148: 2.8999% ( 25) 00:07:54.548 13712.148 - 13812.972: 3.4163% ( 39) 00:07:54.548 13812.972 - 13913.797: 4.0122% ( 45) 
00:07:54.548 13913.797 - 14014.622: 4.4756% ( 35) 00:07:54.548 14014.622 - 14115.446: 5.3893% ( 69) 00:07:54.548 14115.446 - 14216.271: 6.4486% ( 80) 00:07:54.548 14216.271 - 14317.095: 7.3093% ( 65) 00:07:54.548 14317.095 - 14417.920: 8.2760% ( 73) 00:07:54.548 14417.920 - 14518.745: 9.5869% ( 99) 00:07:54.548 14518.745 - 14619.569: 10.9375% ( 102) 00:07:54.548 14619.569 - 14720.394: 12.2749% ( 101) 00:07:54.548 14720.394 - 14821.218: 13.7844% ( 114) 00:07:54.548 14821.218 - 14922.043: 15.3204% ( 116) 00:07:54.548 14922.043 - 15022.868: 17.0154% ( 128) 00:07:54.548 15022.868 - 15123.692: 19.2399% ( 168) 00:07:54.548 15123.692 - 15224.517: 21.0540% ( 137) 00:07:54.548 15224.517 - 15325.342: 23.0800% ( 153) 00:07:54.548 15325.342 - 15426.166: 24.9470% ( 141) 00:07:54.548 15426.166 - 15526.991: 27.5424% ( 196) 00:07:54.548 15526.991 - 15627.815: 29.5948% ( 155) 00:07:54.548 15627.815 - 15728.640: 31.8988% ( 174) 00:07:54.548 15728.640 - 15829.465: 34.2691% ( 179) 00:07:54.548 15829.465 - 15930.289: 36.7850% ( 190) 00:07:54.548 15930.289 - 16031.114: 38.9831% ( 166) 00:07:54.548 16031.114 - 16131.938: 41.5122% ( 191) 00:07:54.548 16131.938 - 16232.763: 43.5646% ( 155) 00:07:54.548 16232.763 - 16333.588: 45.6965% ( 161) 00:07:54.548 16333.588 - 16434.412: 48.0932% ( 181) 00:07:54.548 16434.412 - 16535.237: 50.5694% ( 187) 00:07:54.548 16535.237 - 16636.062: 52.7675% ( 166) 00:07:54.548 16636.062 - 16736.886: 55.1245% ( 178) 00:07:54.548 16736.886 - 16837.711: 57.2564% ( 161) 00:07:54.548 16837.711 - 16938.535: 58.7791% ( 115) 00:07:54.548 16938.535 - 17039.360: 60.7124% ( 146) 00:07:54.548 17039.360 - 17140.185: 62.0763% ( 103) 00:07:54.548 17140.185 - 17241.009: 63.7579% ( 127) 00:07:54.548 17241.009 - 17341.834: 65.3337% ( 119) 00:07:54.548 17341.834 - 17442.658: 66.3136% ( 74) 00:07:54.548 17442.658 - 17543.483: 67.7701% ( 110) 00:07:54.548 17543.483 - 17644.308: 69.1208% ( 102) 00:07:54.548 17644.308 - 17745.132: 70.4052% ( 97) 00:07:54.548 17745.132 - 17845.957: 71.7956% ( 105) 00:07:54.548 17845.957 - 17946.782: 72.8681% ( 81) 00:07:54.548 17946.782 - 18047.606: 74.2717% ( 106) 00:07:54.548 18047.606 - 18148.431: 75.4767% ( 91) 00:07:54.548 18148.431 - 18249.255: 76.3771% ( 68) 00:07:54.548 18249.255 - 18350.080: 77.3305% ( 72) 00:07:54.548 18350.080 - 18450.905: 78.3236% ( 75) 00:07:54.548 18450.905 - 18551.729: 79.5154% ( 90) 00:07:54.548 18551.729 - 18652.554: 80.4688% ( 72) 00:07:54.548 18652.554 - 18753.378: 81.5810% ( 84) 00:07:54.548 18753.378 - 18854.203: 82.4550% ( 66) 00:07:54.548 18854.203 - 18955.028: 83.1700% ( 54) 00:07:54.548 18955.028 - 19055.852: 83.9778% ( 61) 00:07:54.548 19055.852 - 19156.677: 84.6266% ( 49) 00:07:54.548 19156.677 - 19257.502: 85.5403% ( 69) 00:07:54.548 19257.502 - 19358.326: 86.1494% ( 46) 00:07:54.548 19358.326 - 19459.151: 87.1690% ( 77) 00:07:54.548 19459.151 - 19559.975: 88.0032% ( 63) 00:07:54.548 19559.975 - 19660.800: 89.1287% ( 85) 00:07:54.548 19660.800 - 19761.625: 90.0556% ( 70) 00:07:54.548 19761.625 - 19862.449: 90.9958% ( 71) 00:07:54.548 19862.449 - 19963.274: 91.7108% ( 54) 00:07:54.548 19963.274 - 20064.098: 92.4788% ( 58) 00:07:54.548 20064.098 - 20164.923: 93.4190% ( 71) 00:07:54.548 20164.923 - 20265.748: 93.8957% ( 36) 00:07:54.548 20265.748 - 20366.572: 94.6239% ( 55) 00:07:54.548 20366.572 - 20467.397: 95.4714% ( 64) 00:07:54.548 20467.397 - 20568.222: 95.9349% ( 35) 00:07:54.548 20568.222 - 20669.046: 96.5440% ( 46) 00:07:54.548 20669.046 - 20769.871: 97.1133% ( 43) 00:07:54.548 20769.871 - 20870.695: 97.5768% ( 35) 
00:07:54.548 20870.695 - 20971.520: 97.9078% ( 25) 00:07:54.548 20971.520 - 21072.345: 98.0138% ( 8) 00:07:54.548 21072.345 - 21173.169: 98.0800% ( 5) 00:07:54.548 21173.169 - 21273.994: 98.1329% ( 4) 00:07:54.548 21273.994 - 21374.818: 98.1992% ( 5) 00:07:54.548 21374.818 - 21475.643: 98.2654% ( 5) 00:07:54.548 21475.643 - 21576.468: 98.3051% ( 3) 00:07:54.548 28835.840 - 29037.489: 98.3316% ( 2) 00:07:54.548 29037.489 - 29239.138: 98.4905% ( 12) 00:07:54.548 29239.138 - 29440.788: 98.6229% ( 10) 00:07:54.548 29440.788 - 29642.437: 98.7553% ( 10) 00:07:54.548 29642.437 - 29844.086: 98.9010% ( 11) 00:07:54.548 29844.086 - 30045.735: 99.0334% ( 10) 00:07:54.548 30045.735 - 30247.385: 99.1525% ( 9) 00:07:54.548 35691.914 - 35893.563: 99.2717% ( 9) 00:07:54.548 35893.563 - 36095.212: 99.4174% ( 11) 00:07:54.548 36095.212 - 36296.862: 99.5233% ( 8) 00:07:54.548 36296.862 - 36498.511: 99.6690% ( 11) 00:07:54.548 36498.511 - 36700.160: 99.8543% ( 14) 00:07:54.548 36700.160 - 36901.809: 99.9735% ( 9) 00:07:54.548 36901.809 - 37103.458: 100.0000% ( 2) 00:07:54.548 00:07:54.548 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:07:54.549 ============================================================================== 00:07:54.549 Range in us Cumulative IO count 00:07:54.549 7965.145 - 8015.557: 0.0397% ( 3) 00:07:54.549 8015.557 - 8065.969: 0.1324% ( 7) 00:07:54.549 8065.969 - 8116.382: 0.1854% ( 4) 00:07:54.549 8116.382 - 8166.794: 0.2251% ( 3) 00:07:54.549 8166.794 - 8217.206: 0.2913% ( 5) 00:07:54.549 8217.206 - 8267.618: 0.3575% ( 5) 00:07:54.549 8267.618 - 8318.031: 0.4105% ( 4) 00:07:54.549 8318.031 - 8368.443: 0.4635% ( 4) 00:07:54.549 8368.443 - 8418.855: 0.5297% ( 5) 00:07:54.549 8418.855 - 8469.268: 0.5826% ( 4) 00:07:54.549 8469.268 - 8519.680: 0.6224% ( 3) 00:07:54.549 8519.680 - 8570.092: 0.6753% ( 4) 00:07:54.549 8570.092 - 8620.505: 0.7283% ( 4) 00:07:54.549 8620.505 - 8670.917: 0.7945% ( 5) 00:07:54.549 8670.917 - 8721.329: 0.8342% ( 3) 00:07:54.549 8721.329 - 8771.742: 0.8475% ( 1) 00:07:54.549 13006.375 - 13107.200: 0.8872% ( 3) 00:07:54.549 13107.200 - 13208.025: 0.9534% ( 5) 00:07:54.549 13208.025 - 13308.849: 1.1255% ( 13) 00:07:54.549 13308.849 - 13409.674: 1.2977% ( 13) 00:07:54.549 13409.674 - 13510.498: 1.5625% ( 20) 00:07:54.549 13510.498 - 13611.323: 2.0789% ( 39) 00:07:54.549 13611.323 - 13712.148: 2.6086% ( 40) 00:07:54.549 13712.148 - 13812.972: 3.2707% ( 50) 00:07:54.549 13812.972 - 13913.797: 3.8268% ( 42) 00:07:54.549 13913.797 - 14014.622: 4.4227% ( 45) 00:07:54.549 14014.622 - 14115.446: 5.1112% ( 52) 00:07:54.549 14115.446 - 14216.271: 5.6939% ( 44) 00:07:54.549 14216.271 - 14317.095: 6.5678% ( 66) 00:07:54.549 14317.095 - 14417.920: 7.6139% ( 79) 00:07:54.549 14417.920 - 14518.745: 8.6467% ( 78) 00:07:54.549 14518.745 - 14619.569: 9.7060% ( 80) 00:07:54.549 14619.569 - 14720.394: 10.9507% ( 94) 00:07:54.549 14720.394 - 14821.218: 12.3543% ( 106) 00:07:54.549 14821.218 - 14922.043: 13.6520% ( 98) 00:07:54.549 14922.043 - 15022.868: 15.3734% ( 130) 00:07:54.549 15022.868 - 15123.692: 17.0816% ( 129) 00:07:54.549 15123.692 - 15224.517: 18.9221% ( 139) 00:07:54.549 15224.517 - 15325.342: 21.1202% ( 166) 00:07:54.549 15325.342 - 15426.166: 23.6096% ( 188) 00:07:54.549 15426.166 - 15526.991: 26.2315% ( 198) 00:07:54.549 15526.991 - 15627.815: 28.8400% ( 197) 00:07:54.549 15627.815 - 15728.640: 31.4619% ( 198) 00:07:54.549 15728.640 - 15829.465: 33.9910% ( 191) 00:07:54.549 15829.465 - 15930.289: 36.4672% ( 187) 00:07:54.549 15930.289 - 16031.114: 39.2082% 
( 207) 00:07:54.549 16031.114 - 16131.938: 41.9359% ( 206) 00:07:54.549 16131.938 - 16232.763: 44.5842% ( 200) 00:07:54.549 16232.763 - 16333.588: 47.3914% ( 212) 00:07:54.549 16333.588 - 16434.412: 49.8941% ( 189) 00:07:54.549 16434.412 - 16535.237: 52.2775% ( 180) 00:07:54.549 16535.237 - 16636.062: 54.4094% ( 161) 00:07:54.549 16636.062 - 16736.886: 56.4619% ( 155) 00:07:54.549 16736.886 - 16837.711: 58.2627% ( 136) 00:07:54.549 16837.711 - 16938.535: 59.9047% ( 124) 00:07:54.549 16938.535 - 17039.360: 61.5069% ( 121) 00:07:54.549 17039.360 - 17140.185: 62.8708% ( 103) 00:07:54.549 17140.185 - 17241.009: 64.2082% ( 101) 00:07:54.549 17241.009 - 17341.834: 65.9163% ( 129) 00:07:54.549 17341.834 - 17442.658: 67.6377% ( 130) 00:07:54.549 17442.658 - 17543.483: 69.3194% ( 127) 00:07:54.549 17543.483 - 17644.308: 70.6303% ( 99) 00:07:54.549 17644.308 - 17745.132: 72.0207% ( 105) 00:07:54.549 17745.132 - 17845.957: 73.1992% ( 89) 00:07:54.549 17845.957 - 17946.782: 74.2717% ( 81) 00:07:54.549 17946.782 - 18047.606: 75.3972% ( 85) 00:07:54.549 18047.606 - 18148.431: 76.6552% ( 95) 00:07:54.549 18148.431 - 18249.255: 77.8204% ( 88) 00:07:54.549 18249.255 - 18350.080: 78.9989% ( 89) 00:07:54.549 18350.080 - 18450.905: 79.9126% ( 69) 00:07:54.549 18450.905 - 18551.729: 80.6806% ( 58) 00:07:54.549 18551.729 - 18652.554: 81.2897% ( 46) 00:07:54.549 18652.554 - 18753.378: 81.8459% ( 42) 00:07:54.549 18753.378 - 18854.203: 82.3755% ( 40) 00:07:54.549 18854.203 - 18955.028: 82.9184% ( 41) 00:07:54.549 18955.028 - 19055.852: 83.6997% ( 59) 00:07:54.549 19055.852 - 19156.677: 84.5207% ( 62) 00:07:54.549 19156.677 - 19257.502: 85.3284% ( 61) 00:07:54.549 19257.502 - 19358.326: 86.0567% ( 55) 00:07:54.549 19358.326 - 19459.151: 86.8247% ( 58) 00:07:54.549 19459.151 - 19559.975: 87.7119% ( 67) 00:07:54.549 19559.975 - 19660.800: 88.5593% ( 64) 00:07:54.549 19660.800 - 19761.625: 89.4333% ( 66) 00:07:54.549 19761.625 - 19862.449: 90.3867% ( 72) 00:07:54.549 19862.449 - 19963.274: 91.4460% ( 80) 00:07:54.549 19963.274 - 20064.098: 92.3596% ( 69) 00:07:54.549 20064.098 - 20164.923: 93.1409% ( 59) 00:07:54.549 20164.923 - 20265.748: 93.7897% ( 49) 00:07:54.549 20265.748 - 20366.572: 94.4518% ( 50) 00:07:54.549 20366.572 - 20467.397: 95.0609% ( 46) 00:07:54.549 20467.397 - 20568.222: 95.5906% ( 40) 00:07:54.549 20568.222 - 20669.046: 96.0805% ( 37) 00:07:54.549 20669.046 - 20769.871: 96.6102% ( 40) 00:07:54.549 20769.871 - 20870.695: 97.0604% ( 34) 00:07:54.549 20870.695 - 20971.520: 97.4576% ( 30) 00:07:54.549 20971.520 - 21072.345: 97.6827% ( 17) 00:07:54.549 21072.345 - 21173.169: 97.8681% ( 14) 00:07:54.549 21173.169 - 21273.994: 98.0270% ( 12) 00:07:54.549 21273.994 - 21374.818: 98.0932% ( 5) 00:07:54.549 21374.818 - 21475.643: 98.1462% ( 4) 00:07:54.549 21475.643 - 21576.468: 98.1992% ( 4) 00:07:54.549 21576.468 - 21677.292: 98.2654% ( 5) 00:07:54.549 21677.292 - 21778.117: 98.3051% ( 3) 00:07:54.549 28432.542 - 28634.191: 98.4640% ( 12) 00:07:54.549 28634.191 - 28835.840: 98.6229% ( 12) 00:07:54.549 28835.840 - 29037.489: 98.7553% ( 10) 00:07:54.549 29037.489 - 29239.138: 98.9142% ( 12) 00:07:54.549 29239.138 - 29440.788: 99.0863% ( 13) 00:07:54.549 29440.788 - 29642.437: 99.1525% ( 5) 00:07:54.549 35490.265 - 35691.914: 99.3114% ( 12) 00:07:54.549 35691.914 - 35893.563: 99.4703% ( 12) 00:07:54.549 35893.563 - 36095.212: 99.6292% ( 12) 00:07:54.549 36095.212 - 36296.862: 99.7881% ( 12) 00:07:54.549 36296.862 - 36498.511: 99.9603% ( 13) 00:07:54.549 36498.511 - 36700.160: 100.0000% ( 3) 00:07:54.549 
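
Each histogram above pairs a microsecond bucket range with a cumulative completion percentage, and the Summary latency data blocks earlier condense the same measurements into fixed percentile lines (1.00000%, 50.00000%, 99.00000%, and so on). The sketch below shows one plausible reduction, assuming the summary reports the upper edge of the first bucket whose cumulative percentage reaches the target; spdk_nvme_perf itself may interpolate differently, and the bucket values here are hypothetical rather than transcribed from this run.

#include <stdio.h>

/* One histogram row: bucket upper edge in microseconds and the
 * cumulative percentage of I/O completed at or below that latency. */
struct bucket {
	double hi_us;
	double cum_pct;
};

/* Return the upper edge of the first bucket whose cumulative
 * percentage reaches the target percentile. */
static double
pct_to_latency(const struct bucket *b, int n, double target_pct)
{
	int i;

	for (i = 0; i < n; i++) {
		if (b[i].cum_pct >= target_pct) {
			return b[i].hi_us;
		}
	}
	return b[n - 1].hi_us;
}

int
main(void)
{
	/* Hypothetical buckets shaped like the histograms above. */
	const struct bucket b[] = {
		{ 15000.0,  12.5 },
		{ 16000.0,  47.0 },
		{ 17000.0,  58.3 },
		{ 19000.0,  90.1 },
		{ 20500.0,  99.2 },
		{ 22000.0, 100.0 },
	};
	const int n = (int)(sizeof(b) / sizeof(b[0]));

	printf("50.00000%% : %.3fus\n", pct_to_latency(b, n, 50.0)); /* 17000.000us */
	printf("99.00000%% : %.3fus\n", pct_to_latency(b, n, 99.0)); /* 20500.000us */
	return 0;
}

Because the percentages are already cumulative, a single forward scan per requested percentile is enough; run over a full bucket list this yields one latency per percentile, the shape of the summary blocks above.
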
00:07:54.549 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:07:54.549 ============================================================================== 00:07:54.549 Range in us Cumulative IO count 00:07:54.549 6452.775 - 6503.188: 0.0397% ( 3) 00:07:54.549 6503.188 - 6553.600: 0.0927% ( 4) 00:07:54.549 6553.600 - 6604.012: 0.1457% ( 4) 00:07:54.549 6604.012 - 6654.425: 0.1854% ( 3) 00:07:54.549 6654.425 - 6704.837: 0.2516% ( 5) 00:07:54.549 6704.837 - 6755.249: 0.3046% ( 4) 00:07:54.549 6755.249 - 6805.662: 0.3575% ( 4) 00:07:54.549 6805.662 - 6856.074: 0.4237% ( 5) 00:07:54.549 6856.074 - 6906.486: 0.4767% ( 4) 00:07:54.549 6906.486 - 6956.898: 0.5429% ( 5) 00:07:54.549 6956.898 - 7007.311: 0.5959% ( 4) 00:07:54.549 7007.311 - 7057.723: 0.6488% ( 4) 00:07:54.549 7057.723 - 7108.135: 0.7018% ( 4) 00:07:54.549 7108.135 - 7158.548: 0.7548% ( 4) 00:07:54.549 7158.548 - 7208.960: 0.8210% ( 5) 00:07:54.549 7208.960 - 7259.372: 0.8475% ( 2) 00:07:54.549 11796.480 - 11846.892: 0.8739% ( 2) 00:07:54.549 11846.892 - 11897.305: 0.9004% ( 2) 00:07:54.549 11897.305 - 11947.717: 0.9269% ( 2) 00:07:54.549 11947.717 - 11998.129: 0.9534% ( 2) 00:07:54.549 11998.129 - 12048.542: 0.9931% ( 3) 00:07:54.549 12048.542 - 12098.954: 1.0196% ( 2) 00:07:54.549 12098.954 - 12149.366: 1.0726% ( 4) 00:07:54.549 12149.366 - 12199.778: 1.1123% ( 3) 00:07:54.549 12199.778 - 12250.191: 1.1388% ( 2) 00:07:54.549 12250.191 - 12300.603: 1.1653% ( 2) 00:07:54.549 12300.603 - 12351.015: 1.1917% ( 2) 00:07:54.549 12351.015 - 12401.428: 1.2182% ( 2) 00:07:54.549 12401.428 - 12451.840: 1.2579% ( 3) 00:07:54.549 12451.840 - 12502.252: 1.2844% ( 2) 00:07:54.549 12502.252 - 12552.665: 1.3109% ( 2) 00:07:54.549 12552.665 - 12603.077: 1.3374% ( 2) 00:07:54.549 12603.077 - 12653.489: 1.3771% ( 3) 00:07:54.549 12653.489 - 12703.902: 1.4036% ( 2) 00:07:54.549 12703.902 - 12754.314: 1.4433% ( 3) 00:07:54.549 12754.314 - 12804.726: 1.4831% ( 3) 00:07:54.549 12804.726 - 12855.138: 1.5095% ( 2) 00:07:54.549 12855.138 - 12905.551: 1.5493% ( 3) 00:07:54.549 12905.551 - 13006.375: 1.6155% ( 5) 00:07:54.549 13006.375 - 13107.200: 1.7346% ( 9) 00:07:54.549 13107.200 - 13208.025: 1.8935% ( 12) 00:07:54.549 13208.025 - 13308.849: 2.1319% ( 18) 00:07:54.549 13308.849 - 13409.674: 2.4894% ( 27) 00:07:54.549 13409.674 - 13510.498: 2.8204% ( 25) 00:07:54.549 13510.498 - 13611.323: 3.1780% ( 27) 00:07:54.549 13611.323 - 13712.148: 3.5885% ( 31) 00:07:54.549 13712.148 - 13812.972: 4.0651% ( 36) 00:07:54.549 13812.972 - 13913.797: 4.6081% ( 41) 00:07:54.549 13913.797 - 14014.622: 5.1907% ( 44) 00:07:54.549 14014.622 - 14115.446: 5.8925% ( 53) 00:07:54.549 14115.446 - 14216.271: 6.6208% ( 55) 00:07:54.549 14216.271 - 14317.095: 7.3888% ( 58) 00:07:54.549 14317.095 - 14417.920: 8.5938% ( 91) 00:07:54.549 14417.920 - 14518.745: 9.7722% ( 89) 00:07:54.549 14518.745 - 14619.569: 11.0964% ( 100) 00:07:54.549 14619.569 - 14720.394: 12.5927% ( 113) 00:07:54.549 14720.394 - 14821.218: 14.2082% ( 122) 00:07:54.549 14821.218 - 14922.043: 15.8369% ( 123) 00:07:54.549 14922.043 - 15022.868: 17.6245% ( 135) 00:07:54.549 15022.868 - 15123.692: 19.4915% ( 141) 00:07:54.549 15123.692 - 15224.517: 21.2394% ( 132) 00:07:54.549 15224.517 - 15325.342: 23.1859% ( 147) 00:07:54.549 15325.342 - 15426.166: 25.3840% ( 166) 00:07:54.549 15426.166 - 15526.991: 27.7278% ( 177) 00:07:54.549 15526.991 - 15627.815: 30.2304% ( 189) 00:07:54.549 15627.815 - 15728.640: 32.9052% ( 202) 00:07:54.549 15728.640 - 15829.465: 35.5800% ( 202) 00:07:54.549 15829.465 - 15930.289: 
38.0297% ( 185) 00:07:54.549 15930.289 - 16031.114: 40.5720% ( 192) 00:07:54.549 16031.114 - 16131.938: 43.0879% ( 190) 00:07:54.549 16131.938 - 16232.763: 45.3390% ( 170) 00:07:54.549 16232.763 - 16333.588: 47.3252% ( 150) 00:07:54.549 16333.588 - 16434.412: 49.3247% ( 151) 00:07:54.549 16434.412 - 16535.237: 51.3506% ( 153) 00:07:54.550 16535.237 - 16636.062: 53.2839% ( 146) 00:07:54.550 16636.062 - 16736.886: 55.0583% ( 134) 00:07:54.550 16736.886 - 16837.711: 56.6870% ( 123) 00:07:54.550 16837.711 - 16938.535: 58.4084% ( 130) 00:07:54.550 16938.535 - 17039.360: 60.0768% ( 126) 00:07:54.550 17039.360 - 17140.185: 61.6923% ( 122) 00:07:54.550 17140.185 - 17241.009: 63.2283% ( 116) 00:07:54.550 17241.009 - 17341.834: 64.7246% ( 113) 00:07:54.550 17341.834 - 17442.658: 66.2341% ( 114) 00:07:54.550 17442.658 - 17543.483: 67.6642% ( 108) 00:07:54.550 17543.483 - 17644.308: 68.9883% ( 100) 00:07:54.550 17644.308 - 17745.132: 70.2595% ( 96) 00:07:54.550 17745.132 - 17845.957: 71.4778% ( 92) 00:07:54.550 17845.957 - 17946.782: 72.5238% ( 79) 00:07:54.550 17946.782 - 18047.606: 73.4243% ( 68) 00:07:54.550 18047.606 - 18148.431: 74.4174% ( 75) 00:07:54.550 18148.431 - 18249.255: 75.5297% ( 84) 00:07:54.550 18249.255 - 18350.080: 76.6022% ( 81) 00:07:54.550 18350.080 - 18450.905: 77.7410% ( 86) 00:07:54.550 18450.905 - 18551.729: 78.8533% ( 84) 00:07:54.550 18551.729 - 18652.554: 80.0053% ( 87) 00:07:54.550 18652.554 - 18753.378: 81.1573% ( 87) 00:07:54.550 18753.378 - 18854.203: 82.3093% ( 87) 00:07:54.550 18854.203 - 18955.028: 83.6070% ( 98) 00:07:54.550 18955.028 - 19055.852: 84.7722% ( 88) 00:07:54.550 19055.852 - 19156.677: 85.7654% ( 75) 00:07:54.550 19156.677 - 19257.502: 86.6261% ( 65) 00:07:54.550 19257.502 - 19358.326: 87.4603% ( 63) 00:07:54.550 19358.326 - 19459.151: 88.2812% ( 62) 00:07:54.550 19459.151 - 19559.975: 88.9698% ( 52) 00:07:54.550 19559.975 - 19660.800: 89.6186% ( 49) 00:07:54.550 19660.800 - 19761.625: 90.3602% ( 56) 00:07:54.550 19761.625 - 19862.449: 91.1017% ( 56) 00:07:54.550 19862.449 - 19963.274: 91.7373% ( 48) 00:07:54.550 19963.274 - 20064.098: 92.3861% ( 49) 00:07:54.550 20064.098 - 20164.923: 93.0614% ( 51) 00:07:54.550 20164.923 - 20265.748: 93.6573% ( 45) 00:07:54.550 20265.748 - 20366.572: 94.2002% ( 41) 00:07:54.550 20366.572 - 20467.397: 94.8093% ( 46) 00:07:54.550 20467.397 - 20568.222: 95.3655% ( 42) 00:07:54.550 20568.222 - 20669.046: 95.8554% ( 37) 00:07:54.550 20669.046 - 20769.871: 96.2924% ( 33) 00:07:54.550 20769.871 - 20870.695: 96.5572% ( 20) 00:07:54.550 20870.695 - 20971.520: 96.8220% ( 20) 00:07:54.550 20971.520 - 21072.345: 97.0339% ( 16) 00:07:54.550 21072.345 - 21173.169: 97.2722% ( 18) 00:07:54.550 21173.169 - 21273.994: 97.4974% ( 17) 00:07:54.550 21273.994 - 21374.818: 97.6960% ( 15) 00:07:54.550 21374.818 - 21475.643: 97.8284% ( 10) 00:07:54.550 21475.643 - 21576.468: 97.9608% ( 10) 00:07:54.550 21576.468 - 21677.292: 98.1065% ( 11) 00:07:54.550 21677.292 - 21778.117: 98.1992% ( 7) 00:07:54.550 21778.117 - 21878.942: 98.2654% ( 5) 00:07:54.550 21878.942 - 21979.766: 98.3051% ( 3) 00:07:54.550 29037.489 - 29239.138: 98.3581% ( 4) 00:07:54.550 29239.138 - 29440.788: 98.4507% ( 7) 00:07:54.550 29440.788 - 29642.437: 98.5964% ( 11) 00:07:54.550 29642.437 - 29844.086: 98.7421% ( 11) 00:07:54.550 29844.086 - 30045.735: 98.9010% ( 12) 00:07:54.550 30045.735 - 30247.385: 99.0466% ( 11) 00:07:54.550 30247.385 - 30449.034: 99.1525% ( 8) 00:07:54.550 36095.212 - 36296.862: 99.1658% ( 1) 00:07:54.550 36296.862 - 36498.511: 99.3114% ( 11) 
00:07:54.550 36498.511 - 36700.160: 99.4571% ( 11) 00:07:54.550 36700.160 - 36901.809: 99.6028% ( 11) 00:07:54.550 36901.809 - 37103.458: 99.7749% ( 13) 00:07:54.550 37103.458 - 37305.108: 99.9338% ( 12) 00:07:54.550 37305.108 - 37506.757: 100.0000% ( 5) 00:07:54.550 00:07:54.550 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:07:54.550 ============================================================================== 00:07:54.550 Range in us Cumulative IO count 00:07:54.550 5747.003 - 5772.209: 0.0132% ( 1) 00:07:54.550 5772.209 - 5797.415: 0.0397% ( 2) 00:07:54.550 5797.415 - 5822.622: 0.0662% ( 2) 00:07:54.550 5822.622 - 5847.828: 0.1059% ( 3) 00:07:54.550 5847.828 - 5873.034: 0.1324% ( 2) 00:07:54.550 5873.034 - 5898.240: 0.1457% ( 1) 00:07:54.550 5898.240 - 5923.446: 0.1721% ( 2) 00:07:54.550 5923.446 - 5948.652: 0.1986% ( 2) 00:07:54.550 5948.652 - 5973.858: 0.2383% ( 3) 00:07:54.550 5973.858 - 5999.065: 0.2648% ( 2) 00:07:54.550 5999.065 - 6024.271: 0.2913% ( 2) 00:07:54.550 6024.271 - 6049.477: 0.3178% ( 2) 00:07:54.550 6049.477 - 6074.683: 0.3443% ( 2) 00:07:54.550 6074.683 - 6099.889: 0.3708% ( 2) 00:07:54.550 6099.889 - 6125.095: 0.4105% ( 3) 00:07:54.550 6125.095 - 6150.302: 0.4370% ( 2) 00:07:54.550 6150.302 - 6175.508: 0.4635% ( 2) 00:07:54.550 6175.508 - 6200.714: 0.4899% ( 2) 00:07:54.550 6200.714 - 6225.920: 0.5164% ( 2) 00:07:54.550 6225.920 - 6251.126: 0.5429% ( 2) 00:07:54.550 6251.126 - 6276.332: 0.5694% ( 2) 00:07:54.550 6276.332 - 6301.538: 0.5959% ( 2) 00:07:54.550 6301.538 - 6326.745: 0.6224% ( 2) 00:07:54.550 6326.745 - 6351.951: 0.6488% ( 2) 00:07:54.550 6351.951 - 6377.157: 0.6753% ( 2) 00:07:54.550 6377.157 - 6402.363: 0.7018% ( 2) 00:07:54.550 6402.363 - 6427.569: 0.7283% ( 2) 00:07:54.550 6427.569 - 6452.775: 0.7415% ( 1) 00:07:54.550 6452.775 - 6503.188: 0.7812% ( 3) 00:07:54.550 6503.188 - 6553.600: 0.8342% ( 4) 00:07:54.550 6553.600 - 6604.012: 0.8475% ( 1) 00:07:54.550 11040.295 - 11090.708: 0.8607% ( 1) 00:07:54.550 11090.708 - 11141.120: 0.8872% ( 2) 00:07:54.550 11141.120 - 11191.532: 0.9666% ( 6) 00:07:54.550 11191.532 - 11241.945: 1.0064% ( 3) 00:07:54.550 11241.945 - 11292.357: 1.0461% ( 3) 00:07:54.550 11292.357 - 11342.769: 1.0593% ( 1) 00:07:54.550 11342.769 - 11393.182: 1.0990% ( 3) 00:07:54.550 11393.182 - 11443.594: 1.1255% ( 2) 00:07:54.550 11443.594 - 11494.006: 1.1520% ( 2) 00:07:54.550 11494.006 - 11544.418: 1.1785% ( 2) 00:07:54.550 11544.418 - 11594.831: 1.2182% ( 3) 00:07:54.550 11594.831 - 11645.243: 1.2447% ( 2) 00:07:54.550 11645.243 - 11695.655: 1.2844% ( 3) 00:07:54.550 11695.655 - 11746.068: 1.3109% ( 2) 00:07:54.550 11746.068 - 11796.480: 1.3506% ( 3) 00:07:54.550 11796.480 - 11846.892: 1.3904% ( 3) 00:07:54.550 11846.892 - 11897.305: 1.4168% ( 2) 00:07:54.550 11897.305 - 11947.717: 1.4566% ( 3) 00:07:54.550 11947.717 - 11998.129: 1.4831% ( 2) 00:07:54.550 11998.129 - 12048.542: 1.5228% ( 3) 00:07:54.550 12048.542 - 12098.954: 1.5625% ( 3) 00:07:54.550 12098.954 - 12149.366: 1.5890% ( 2) 00:07:54.550 12149.366 - 12199.778: 1.6287% ( 3) 00:07:54.550 12199.778 - 12250.191: 1.6552% ( 2) 00:07:54.550 12250.191 - 12300.603: 1.6949% ( 3) 00:07:54.550 12502.252 - 12552.665: 1.7214% ( 2) 00:07:54.550 12552.665 - 12603.077: 1.7479% ( 2) 00:07:54.550 12603.077 - 12653.489: 1.7876% ( 3) 00:07:54.550 12653.489 - 12703.902: 1.8008% ( 1) 00:07:54.550 12703.902 - 12754.314: 1.8141% ( 1) 00:07:54.550 12754.314 - 12804.726: 1.8538% ( 3) 00:07:54.550 12804.726 - 12855.138: 1.8803% ( 2) 00:07:54.550 12855.138 - 12905.551: 1.9068% ( 2) 
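Since the percentage column is cumulative, approximate percentiles fall straight out of any of these blocks: the p50 estimate is the upper bound of the first bucket whose cumulative value reaches 50%. In the PCIE (0000:00:12.0) NSID 1 block above that happens at the 16434.412 - 16535.237 bucket (51.3506%), i.e. p50 is roughly 16.5 ms. A sketch that automates the lookup on the re-split records produced by the previous snippet:

# Rough percentile lookup over one device's bucket records (feed it the
# output of the extraction sketch above). TARGET=50 locates the p50 bucket.
TARGET=50
awk -v t="$TARGET" '
  $3 == "-" {                  # record: <ts> <low> - <high>: <cum%> ( <n>)
    cum = $5; sub(/%$/, "", cum)
    if (cum + 0 >= t) {
      hi = $4; sub(/:$/, "", hi)
      printf "p%g is at most %s us (bucket %s - %s)\n", t, hi, $2, hi
      exit
    }
  }'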
00:07:54.550 12905.551 - 13006.375: 1.9465% ( 3) 00:07:54.550 13006.375 - 13107.200: 1.9995% ( 4) 00:07:54.550 13107.200 - 13208.025: 2.0657% ( 5) 00:07:54.550 13208.025 - 13308.849: 2.1716% ( 8) 00:07:54.550 13308.849 - 13409.674: 2.3040% ( 10) 00:07:54.550 13409.674 - 13510.498: 2.4232% ( 9) 00:07:54.550 13510.498 - 13611.323: 2.5821% ( 12) 00:07:54.550 13611.323 - 13712.148: 3.0191% ( 33) 00:07:54.550 13712.148 - 13812.972: 3.5885% ( 43) 00:07:54.550 13812.972 - 13913.797: 4.1976% ( 46) 00:07:54.550 13913.797 - 14014.622: 4.8729% ( 51) 00:07:54.550 14014.622 - 14115.446: 5.6541% ( 59) 00:07:54.550 14115.446 - 14216.271: 6.6605% ( 76) 00:07:54.550 14216.271 - 14317.095: 8.0508% ( 105) 00:07:54.550 14317.095 - 14417.920: 9.5604% ( 114) 00:07:54.550 14417.920 - 14518.745: 10.8183% ( 95) 00:07:54.550 14518.745 - 14619.569: 11.9968% ( 89) 00:07:54.550 14619.569 - 14720.394: 13.3739% ( 104) 00:07:54.550 14720.394 - 14821.218: 14.8305% ( 110) 00:07:54.550 14821.218 - 14922.043: 16.2209% ( 105) 00:07:54.550 14922.043 - 15022.868: 17.8496% ( 123) 00:07:54.550 15022.868 - 15123.692: 19.5710% ( 130) 00:07:54.550 15123.692 - 15224.517: 21.7956% ( 168) 00:07:54.550 15224.517 - 15325.342: 23.9539% ( 163) 00:07:54.550 15325.342 - 15426.166: 26.3109% ( 178) 00:07:54.550 15426.166 - 15526.991: 28.6679% ( 178) 00:07:54.550 15526.991 - 15627.815: 31.0381% ( 179) 00:07:54.550 15627.815 - 15728.640: 33.5805% ( 192) 00:07:54.550 15728.640 - 15829.465: 36.0434% ( 186) 00:07:54.550 15829.465 - 15930.289: 38.1886% ( 162) 00:07:54.550 15930.289 - 16031.114: 40.2675% ( 157) 00:07:54.550 16031.114 - 16131.938: 42.2272% ( 148) 00:07:54.550 16131.938 - 16232.763: 43.9089% ( 127) 00:07:54.550 16232.763 - 16333.588: 45.7627% ( 140) 00:07:54.550 16333.588 - 16434.412: 47.2458% ( 112) 00:07:54.550 16434.412 - 16535.237: 48.7553% ( 114) 00:07:54.550 16535.237 - 16636.062: 50.2516% ( 113) 00:07:54.550 16636.062 - 16736.886: 51.9730% ( 130) 00:07:54.550 16736.886 - 16837.711: 53.6547% ( 127) 00:07:54.550 16837.711 - 16938.535: 55.3761% ( 130) 00:07:54.550 16938.535 - 17039.360: 57.2696% ( 143) 00:07:54.550 17039.360 - 17140.185: 59.0969% ( 138) 00:07:54.550 17140.185 - 17241.009: 60.7654% ( 126) 00:07:54.550 17241.009 - 17341.834: 62.4338% ( 126) 00:07:54.550 17341.834 - 17442.658: 64.0757% ( 124) 00:07:54.550 17442.658 - 17543.483: 65.7044% ( 123) 00:07:54.550 17543.483 - 17644.308: 67.2272% ( 115) 00:07:54.550 17644.308 - 17745.132: 68.7235% ( 113) 00:07:54.550 17745.132 - 17845.957: 70.2728% ( 117) 00:07:54.550 17845.957 - 17946.782: 71.7691% ( 113) 00:07:54.550 17946.782 - 18047.606: 73.3051% ( 116) 00:07:54.550 18047.606 - 18148.431: 74.8146% ( 114) 00:07:54.550 18148.431 - 18249.255: 76.3109% ( 113) 00:07:54.550 18249.255 - 18350.080: 77.8072% ( 113) 00:07:54.550 18350.080 - 18450.905: 79.1711% ( 103) 00:07:54.550 18450.905 - 18551.729: 80.6674% ( 113) 00:07:54.550 18551.729 - 18652.554: 82.0975% ( 108) 00:07:54.551 18652.554 - 18753.378: 83.0773% ( 74) 00:07:54.551 18753.378 - 18854.203: 84.2029% ( 85) 00:07:54.551 18854.203 - 18955.028: 85.4343% ( 93) 00:07:54.551 18955.028 - 19055.852: 86.4142% ( 74) 00:07:54.551 19055.852 - 19156.677: 87.2484% ( 63) 00:07:54.551 19156.677 - 19257.502: 88.1091% ( 65) 00:07:54.551 19257.502 - 19358.326: 88.9698% ( 65) 00:07:54.551 19358.326 - 19459.151: 89.7775% ( 61) 00:07:54.551 19459.151 - 19559.975: 90.6250% ( 64) 00:07:54.551 19559.975 - 19660.800: 91.1679% ( 41) 00:07:54.551 19660.800 - 19761.625: 91.6843% ( 39) 00:07:54.551 19761.625 - 19862.449: 92.1213% ( 33) 00:07:54.551 
19862.449 - 19963.274: 92.6245% ( 38) 00:07:54.551 19963.274 - 20064.098: 93.1409% ( 39) 00:07:54.551 20064.098 - 20164.923: 93.6176% ( 36) 00:07:54.551 20164.923 - 20265.748: 94.0810% ( 35) 00:07:54.551 20265.748 - 20366.572: 94.4121% ( 25) 00:07:54.551 20366.572 - 20467.397: 94.8358% ( 32) 00:07:54.551 20467.397 - 20568.222: 95.2463% ( 31) 00:07:54.551 20568.222 - 20669.046: 95.6435% ( 30) 00:07:54.551 20669.046 - 20769.871: 95.9878% ( 26) 00:07:54.551 20769.871 - 20870.695: 96.3189% ( 25) 00:07:54.551 20870.695 - 20971.520: 96.7426% ( 32) 00:07:54.551 20971.520 - 21072.345: 97.1266% ( 29) 00:07:54.551 21072.345 - 21173.169: 97.3914% ( 20) 00:07:54.551 21173.169 - 21273.994: 97.5238% ( 10) 00:07:54.551 21273.994 - 21374.818: 97.6562% ( 10) 00:07:54.551 21374.818 - 21475.643: 97.7754% ( 9) 00:07:54.551 21475.643 - 21576.468: 97.9343% ( 12) 00:07:54.551 21576.468 - 21677.292: 98.0270% ( 7) 00:07:54.551 21677.292 - 21778.117: 98.1197% ( 7) 00:07:54.551 21778.117 - 21878.942: 98.1992% ( 6) 00:07:54.551 21878.942 - 21979.766: 98.2786% ( 6) 00:07:54.551 21979.766 - 22080.591: 98.3051% ( 2) 00:07:54.551 28432.542 - 28634.191: 98.3978% ( 7) 00:07:54.551 28634.191 - 28835.840: 98.5567% ( 12) 00:07:54.551 28835.840 - 29037.489: 98.7288% ( 13) 00:07:54.551 29037.489 - 29239.138: 98.8877% ( 12) 00:07:54.551 29239.138 - 29440.788: 98.9804% ( 7) 00:07:54.551 29440.788 - 29642.437: 99.1393% ( 12) 00:07:54.551 29642.437 - 29844.086: 99.1525% ( 1) 00:07:54.551 36095.212 - 36296.862: 99.1790% ( 2) 00:07:54.551 36296.862 - 36498.511: 99.3247% ( 11) 00:07:54.551 36498.511 - 36700.160: 99.4836% ( 12) 00:07:54.551 36700.160 - 36901.809: 99.6292% ( 11) 00:07:54.551 36901.809 - 37103.458: 99.7881% ( 12) 00:07:54.551 37103.458 - 37305.108: 99.9603% ( 13) 00:07:54.551 37305.108 - 37506.757: 100.0000% ( 3) 00:07:54.551 00:07:54.551 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:07:54.551 ============================================================================== 00:07:54.551 Range in us Cumulative IO count 00:07:54.551 4839.582 - 4864.788: 0.0131% ( 1) 00:07:54.551 4864.788 - 4889.994: 0.0394% ( 2) 00:07:54.551 4889.994 - 4915.200: 0.0788% ( 3) 00:07:54.551 4915.200 - 4940.406: 0.1050% ( 2) 00:07:54.551 4940.406 - 4965.612: 0.1313% ( 2) 00:07:54.551 4965.612 - 4990.818: 0.1576% ( 2) 00:07:54.551 4990.818 - 5016.025: 0.1970% ( 3) 00:07:54.551 5016.025 - 5041.231: 0.2232% ( 2) 00:07:54.551 5041.231 - 5066.437: 0.2889% ( 5) 00:07:54.551 5066.437 - 5091.643: 0.3414% ( 4) 00:07:54.551 5091.643 - 5116.849: 0.3808% ( 3) 00:07:54.551 5116.849 - 5142.055: 0.4070% ( 2) 00:07:54.551 5142.055 - 5167.262: 0.4333% ( 2) 00:07:54.551 5167.262 - 5192.468: 0.4596% ( 2) 00:07:54.551 5192.468 - 5217.674: 0.4858% ( 2) 00:07:54.551 5217.674 - 5242.880: 0.5121% ( 2) 00:07:54.551 5242.880 - 5268.086: 0.5252% ( 1) 00:07:54.551 5268.086 - 5293.292: 0.5515% ( 2) 00:07:54.551 5293.292 - 5318.498: 0.5909% ( 3) 00:07:54.551 5318.498 - 5343.705: 0.6171% ( 2) 00:07:54.551 5343.705 - 5368.911: 0.6434% ( 2) 00:07:54.551 5368.911 - 5394.117: 0.6696% ( 2) 00:07:54.551 5394.117 - 5419.323: 0.7090% ( 3) 00:07:54.551 5419.323 - 5444.529: 0.7353% ( 2) 00:07:54.551 5444.529 - 5469.735: 0.7616% ( 2) 00:07:54.551 5469.735 - 5494.942: 0.7747% ( 1) 00:07:54.551 5494.942 - 5520.148: 0.8009% ( 2) 00:07:54.551 5520.148 - 5545.354: 0.8272% ( 2) 00:07:54.551 5545.354 - 5570.560: 0.8403% ( 1) 00:07:54.551 10737.822 - 10788.234: 0.8535% ( 1) 00:07:54.551 10788.234 - 10838.646: 0.9191% ( 5) 00:07:54.551 10838.646 - 10889.058: 0.9716% ( 4) 
00:07:54.551 10889.058 - 10939.471: 1.0504% ( 6) 00:07:54.551 10939.471 - 10989.883: 1.1029% ( 4) 00:07:54.551 10989.883 - 11040.295: 1.1686% ( 5) 00:07:54.551 11040.295 - 11090.708: 1.2211% ( 4) 00:07:54.551 11090.708 - 11141.120: 1.2736% ( 4) 00:07:54.551 11141.120 - 11191.532: 1.3393% ( 5) 00:07:54.551 11191.532 - 11241.945: 1.4049% ( 5) 00:07:54.551 11241.945 - 11292.357: 1.4575% ( 4) 00:07:54.551 11292.357 - 11342.769: 1.5231% ( 5) 00:07:54.551 11342.769 - 11393.182: 1.5756% ( 4) 00:07:54.551 11393.182 - 11443.594: 1.6413% ( 5) 00:07:54.551 11443.594 - 11494.006: 1.6807% ( 3) 00:07:54.551 12401.428 - 12451.840: 1.7332% ( 4) 00:07:54.551 12451.840 - 12502.252: 1.7988% ( 5) 00:07:54.551 12502.252 - 12552.665: 1.8645% ( 5) 00:07:54.551 12552.665 - 12603.077: 1.8908% ( 2) 00:07:54.551 12603.077 - 12653.489: 1.9170% ( 2) 00:07:54.551 12653.489 - 12703.902: 1.9301% ( 1) 00:07:54.551 12703.902 - 12754.314: 1.9958% ( 5) 00:07:54.551 12754.314 - 12804.726: 2.0614% ( 5) 00:07:54.551 12804.726 - 12855.138: 2.1140% ( 4) 00:07:54.551 12855.138 - 12905.551: 2.1402% ( 2) 00:07:54.551 12905.551 - 13006.375: 2.2190% ( 6) 00:07:54.551 13006.375 - 13107.200: 2.3372% ( 9) 00:07:54.551 13107.200 - 13208.025: 2.4422% ( 8) 00:07:54.551 13208.025 - 13308.849: 2.6261% ( 14) 00:07:54.551 13308.849 - 13409.674: 2.8493% ( 17) 00:07:54.551 13409.674 - 13510.498: 3.0856% ( 18) 00:07:54.551 13510.498 - 13611.323: 3.3220% ( 18) 00:07:54.551 13611.323 - 13712.148: 3.8340% ( 39) 00:07:54.551 13712.148 - 13812.972: 4.2673% ( 33) 00:07:54.551 13812.972 - 13913.797: 4.8451% ( 44) 00:07:54.551 13913.797 - 14014.622: 5.5147% ( 51) 00:07:54.551 14014.622 - 14115.446: 6.4207% ( 69) 00:07:54.551 14115.446 - 14216.271: 7.3661% ( 72) 00:07:54.551 14216.271 - 14317.095: 8.4821% ( 85) 00:07:54.551 14317.095 - 14417.920: 9.8346% ( 103) 00:07:54.551 14417.920 - 14518.745: 11.2132% ( 105) 00:07:54.551 14518.745 - 14619.569: 12.8414% ( 124) 00:07:54.551 14619.569 - 14720.394: 14.1150% ( 97) 00:07:54.551 14720.394 - 14821.218: 15.5593% ( 110) 00:07:54.551 14821.218 - 14922.043: 17.5420% ( 151) 00:07:54.551 14922.043 - 15022.868: 19.4984% ( 149) 00:07:54.551 15022.868 - 15123.692: 21.7568% ( 172) 00:07:54.551 15123.692 - 15224.517: 24.0021% ( 171) 00:07:54.551 15224.517 - 15325.342: 26.0767% ( 158) 00:07:54.551 15325.342 - 15426.166: 28.1775% ( 160) 00:07:54.551 15426.166 - 15526.991: 30.4491% ( 173) 00:07:54.551 15526.991 - 15627.815: 32.6287% ( 166) 00:07:54.551 15627.815 - 15728.640: 34.7689% ( 163) 00:07:54.551 15728.640 - 15829.465: 36.9223% ( 164) 00:07:54.551 15829.465 - 15930.289: 38.9837% ( 157) 00:07:54.551 15930.289 - 16031.114: 41.0058% ( 154) 00:07:54.551 16031.114 - 16131.938: 42.9491% ( 148) 00:07:54.551 16131.938 - 16232.763: 44.7479% ( 137) 00:07:54.551 16232.763 - 16333.588: 46.6255% ( 143) 00:07:54.551 16333.588 - 16434.412: 48.4375% ( 138) 00:07:54.551 16434.412 - 16535.237: 50.1707% ( 132) 00:07:54.551 16535.237 - 16636.062: 51.8776% ( 130) 00:07:54.552 16636.062 - 16736.886: 53.5058% ( 124) 00:07:54.552 16736.886 - 16837.711: 55.1864% ( 128) 00:07:54.552 16837.711 - 16938.535: 56.6833% ( 114) 00:07:54.552 16938.535 - 17039.360: 58.1670% ( 113) 00:07:54.552 17039.360 - 17140.185: 59.3750% ( 92) 00:07:54.552 17140.185 - 17241.009: 60.5436% ( 89) 00:07:54.552 17241.009 - 17341.834: 61.7778% ( 94) 00:07:54.552 17341.834 - 17442.658: 63.0909% ( 100) 00:07:54.552 17442.658 - 17543.483: 64.4170% ( 101) 00:07:54.552 17543.483 - 17644.308: 65.9795% ( 119) 00:07:54.552 17644.308 - 17745.132: 67.3451% ( 104) 00:07:54.552 
17745.132 - 17845.957: 68.6975% ( 103) 00:07:54.552 17845.957 - 17946.782: 70.0236% ( 101) 00:07:54.552 17946.782 - 18047.606: 71.4942% ( 112) 00:07:54.552 18047.606 - 18148.431: 73.0042% ( 115) 00:07:54.552 18148.431 - 18249.255: 74.4223% ( 108) 00:07:54.552 18249.255 - 18350.080: 75.9585% ( 117) 00:07:54.552 18350.080 - 18450.905: 77.4947% ( 117) 00:07:54.552 18450.905 - 18551.729: 79.0835% ( 121) 00:07:54.552 18551.729 - 18652.554: 80.4491% ( 104) 00:07:54.552 18652.554 - 18753.378: 81.8934% ( 110) 00:07:54.552 18753.378 - 18854.203: 83.2195% ( 101) 00:07:54.552 18854.203 - 18955.028: 84.3881% ( 89) 00:07:54.552 18955.028 - 19055.852: 85.5173% ( 86) 00:07:54.552 19055.852 - 19156.677: 86.5284% ( 77) 00:07:54.552 19156.677 - 19257.502: 87.3950% ( 66) 00:07:54.552 19257.502 - 19358.326: 88.2222% ( 63) 00:07:54.552 19358.326 - 19459.151: 89.0231% ( 61) 00:07:54.552 19459.151 - 19559.975: 89.7847% ( 58) 00:07:54.552 19559.975 - 19660.800: 90.5987% ( 62) 00:07:54.552 19660.800 - 19761.625: 91.3603% ( 58) 00:07:54.552 19761.625 - 19862.449: 92.2269% ( 66) 00:07:54.552 19862.449 - 19963.274: 93.0147% ( 60) 00:07:54.552 19963.274 - 20064.098: 93.7106% ( 53) 00:07:54.552 20064.098 - 20164.923: 94.4590% ( 57) 00:07:54.552 20164.923 - 20265.748: 95.1287% ( 51) 00:07:54.552 20265.748 - 20366.572: 95.7195% ( 45) 00:07:54.552 20366.572 - 20467.397: 96.2710% ( 42) 00:07:54.552 20467.397 - 20568.222: 96.7568% ( 37) 00:07:54.552 20568.222 - 20669.046: 97.1639% ( 31) 00:07:54.552 20669.046 - 20769.871: 97.5053% ( 26) 00:07:54.552 20769.871 - 20870.695: 97.7679% ( 20) 00:07:54.552 20870.695 - 20971.520: 97.9911% ( 17) 00:07:54.552 20971.520 - 21072.345: 98.1486% ( 12) 00:07:54.552 21072.345 - 21173.169: 98.2405% ( 7) 00:07:54.552 21173.169 - 21273.994: 98.2931% ( 4) 00:07:54.552 21273.994 - 21374.818: 98.3193% ( 2) 00:07:54.552 22282.240 - 22383.065: 98.3718% ( 4) 00:07:54.552 22383.065 - 22483.889: 98.4506% ( 6) 00:07:54.552 22483.889 - 22584.714: 98.5294% ( 6) 00:07:54.552 22584.714 - 22685.538: 98.6082% ( 6) 00:07:54.552 22685.538 - 22786.363: 98.6870% ( 6) 00:07:54.552 22786.363 - 22887.188: 98.7789% ( 7) 00:07:54.552 22887.188 - 22988.012: 98.8577% ( 6) 00:07:54.552 22988.012 - 23088.837: 98.9364% ( 6) 00:07:54.552 23088.837 - 23189.662: 99.0152% ( 6) 00:07:54.552 23189.662 - 23290.486: 99.0940% ( 6) 00:07:54.552 23290.486 - 23391.311: 99.1597% ( 5) 00:07:54.552 28634.191 - 28835.840: 99.1728% ( 1) 00:07:54.552 28835.840 - 29037.489: 99.3304% ( 12) 00:07:54.552 29037.489 - 29239.138: 99.4879% ( 12) 00:07:54.552 29239.138 - 29440.788: 99.6455% ( 12) 00:07:54.552 29440.788 - 29642.437: 99.8030% ( 12) 00:07:54.552 29642.437 - 29844.086: 99.9475% ( 11) 00:07:54.552 29844.086 - 30045.735: 100.0000% ( 4) 00:07:54.552 00:07:54.552 10:39:54 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0 00:07:55.498 Initializing NVMe Controllers 00:07:55.498 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:55.498 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:55.498 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:55.498 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:55.498 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:07:55.498 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:07:55.498 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:07:55.498 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:07:55.498 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 
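The harness line above shows the exact invocation for this write-latency pass: spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0. As a hedged reference for reproducing it by hand (the binary path is the one printed in the log; the flag glosses follow the tool's usage text and should be checked against --help on your SPDK version, since details shift between releases):

# Flag summary (verify against your build):
#   -q 128    outstanding I/Os per namespace (queue depth)
#   -w write  sequential-write workload
#   -o 12288  I/O size in bytes (12 KiB)
#   -t 1      run time in seconds
#   -LL       software latency tracking; the second L requests the
#             per-bucket histograms printed in this log
#   -i 0      shared-memory group id, so SPDK processes can coexist
# Typically needs root for device access:
sudo /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf \
    -q 128 -w write -o 12288 -t 1 -LL -i 0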
00:07:55.498 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:07:55.498 Initialization complete. Launching workers. 00:07:55.498 ======================================================== 00:07:55.498 Latency(us) 00:07:55.498 Device Information : IOPS MiB/s Average min max 00:07:55.498 PCIE (0000:00:13.0) NSID 1 from core 0: 7947.38 93.13 16124.29 10998.72 37059.53 00:07:55.498 PCIE (0000:00:10.0) NSID 1 from core 0: 7947.38 93.13 16113.25 10136.89 37192.50 00:07:55.498 PCIE (0000:00:11.0) NSID 1 from core 0: 7947.38 93.13 16096.41 9786.05 36645.94 00:07:55.498 PCIE (0000:00:12.0) NSID 1 from core 0: 7947.38 93.13 16080.76 7690.03 37292.00 00:07:55.498 PCIE (0000:00:12.0) NSID 2 from core 0: 7947.38 93.13 16064.84 6667.84 36999.30 00:07:55.498 PCIE (0000:00:12.0) NSID 3 from core 0: 8010.96 93.88 15920.83 5991.12 28289.94 00:07:55.498 ======================================================== 00:07:55.498 Total : 47747.86 559.55 16066.53 5991.12 37292.00 00:07:55.498 00:07:55.498 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:07:55.498 ================================================================================= 00:07:55.498 1.00000% : 12905.551us 00:07:55.498 10.00000% : 14014.622us 00:07:55.498 25.00000% : 14720.394us 00:07:55.498 50.00000% : 15829.465us 00:07:55.498 75.00000% : 16938.535us 00:07:55.498 90.00000% : 18551.729us 00:07:55.498 95.00000% : 19559.975us 00:07:55.498 98.00000% : 20669.046us 00:07:55.498 99.00000% : 26819.348us 00:07:55.498 99.50000% : 36095.212us 00:07:55.498 99.90000% : 36901.809us 00:07:55.498 99.99000% : 37103.458us 00:07:55.498 99.99900% : 37103.458us 00:07:55.498 99.99990% : 37103.458us 00:07:55.498 99.99999% : 37103.458us 00:07:55.498 00:07:55.498 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:07:55.498 ================================================================================= 00:07:55.498 1.00000% : 12754.314us 00:07:55.498 10.00000% : 13913.797us 00:07:55.498 25.00000% : 14720.394us 00:07:55.498 50.00000% : 15829.465us 00:07:55.498 75.00000% : 16938.535us 00:07:55.498 90.00000% : 18652.554us 00:07:55.498 95.00000% : 19459.151us 00:07:55.498 98.00000% : 20467.397us 00:07:55.498 99.00000% : 26617.698us 00:07:55.498 99.50000% : 36296.862us 00:07:55.498 99.90000% : 37103.458us 00:07:55.498 99.99000% : 37305.108us 00:07:55.498 99.99900% : 37305.108us 00:07:55.498 99.99990% : 37305.108us 00:07:55.498 99.99999% : 37305.108us 00:07:55.498 00:07:55.498 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:07:55.498 ================================================================================= 00:07:55.498 1.00000% : 13006.375us 00:07:55.498 10.00000% : 14014.622us 00:07:55.498 25.00000% : 14720.394us 00:07:55.498 50.00000% : 15829.465us 00:07:55.498 75.00000% : 16938.535us 00:07:55.498 90.00000% : 18551.729us 00:07:55.498 95.00000% : 19459.151us 00:07:55.498 98.00000% : 20870.695us 00:07:55.498 99.00000% : 26214.400us 00:07:55.498 99.50000% : 35691.914us 00:07:55.498 99.90000% : 36498.511us 00:07:55.498 99.99000% : 36700.160us 00:07:55.498 99.99900% : 36700.160us 00:07:55.498 99.99990% : 36700.160us 00:07:55.498 99.99999% : 36700.160us 00:07:55.498 00:07:55.498 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:07:55.498 ================================================================================= 00:07:55.498 1.00000% : 12804.726us 00:07:55.498 10.00000% : 13913.797us 00:07:55.498 25.00000% : 14720.394us 00:07:55.498 50.00000% : 15728.640us 00:07:55.498 75.00000% : 
17039.360us 00:07:55.498 90.00000% : 18551.729us 00:07:55.498 95.00000% : 19761.625us 00:07:55.498 98.00000% : 20467.397us 00:07:55.498 99.00000% : 27020.997us 00:07:55.498 99.50000% : 36498.511us 00:07:55.498 99.90000% : 37103.458us 00:07:55.498 99.99000% : 37305.108us 00:07:55.498 99.99900% : 37305.108us 00:07:55.498 99.99990% : 37305.108us 00:07:55.498 99.99999% : 37305.108us 00:07:55.498 00:07:55.498 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:07:55.498 ================================================================================= 00:07:55.498 1.00000% : 12653.489us 00:07:55.498 10.00000% : 13812.972us 00:07:55.498 25.00000% : 14720.394us 00:07:55.498 50.00000% : 15728.640us 00:07:55.498 75.00000% : 17039.360us 00:07:55.498 90.00000% : 18551.729us 00:07:55.498 95.00000% : 19660.800us 00:07:55.498 98.00000% : 20265.748us 00:07:55.498 99.00000% : 27424.295us 00:07:55.498 99.50000% : 36095.212us 00:07:55.498 99.90000% : 36901.809us 00:07:55.498 99.99000% : 37103.458us 00:07:55.498 99.99900% : 37103.458us 00:07:55.498 99.99990% : 37103.458us 00:07:55.498 99.99999% : 37103.458us 00:07:55.498 00:07:55.498 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:07:55.498 ================================================================================= 00:07:55.498 1.00000% : 12098.954us 00:07:55.498 10.00000% : 14014.622us 00:07:55.498 25.00000% : 14619.569us 00:07:55.498 50.00000% : 15728.640us 00:07:55.498 75.00000% : 17039.360us 00:07:55.498 90.00000% : 18652.554us 00:07:55.498 95.00000% : 19459.151us 00:07:55.498 98.00000% : 20064.098us 00:07:55.498 99.00000% : 20669.046us 00:07:55.498 99.50000% : 27625.945us 00:07:55.498 99.90000% : 28230.892us 00:07:55.498 99.99000% : 28432.542us 00:07:55.498 99.99900% : 28432.542us 00:07:55.498 99.99990% : 28432.542us 00:07:55.498 99.99999% : 28432.542us 00:07:55.498 00:07:55.498 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:07:55.498 ============================================================================== 00:07:55.498 Range in us Cumulative IO count 00:07:55.498 10989.883 - 11040.295: 0.0500% ( 4) 00:07:55.498 11040.295 - 11090.708: 0.1375% ( 7) 00:07:55.498 11090.708 - 11141.120: 0.2125% ( 6) 00:07:55.498 11141.120 - 11191.532: 0.4375% ( 18) 00:07:55.498 11191.532 - 11241.945: 0.5500% ( 9) 00:07:55.498 11241.945 - 11292.357: 0.5875% ( 3) 00:07:55.498 11292.357 - 11342.769: 0.6250% ( 3) 00:07:55.498 11342.769 - 11393.182: 0.6750% ( 4) 00:07:55.498 11393.182 - 11443.594: 0.7125% ( 3) 00:07:55.498 11443.594 - 11494.006: 0.7625% ( 4) 00:07:55.498 11494.006 - 11544.418: 0.7875% ( 2) 00:07:55.498 11544.418 - 11594.831: 0.8000% ( 1) 00:07:55.498 12703.902 - 12754.314: 0.8125% ( 1) 00:07:55.498 12754.314 - 12804.726: 0.8625% ( 4) 00:07:55.498 12804.726 - 12855.138: 0.9500% ( 7) 00:07:55.498 12855.138 - 12905.551: 1.0125% ( 5) 00:07:55.498 12905.551 - 13006.375: 1.2625% ( 20) 00:07:55.498 13006.375 - 13107.200: 1.9375% ( 54) 00:07:55.498 13107.200 - 13208.025: 2.8500% ( 73) 00:07:55.498 13208.025 - 13308.849: 3.6125% ( 61) 00:07:55.498 13308.849 - 13409.674: 4.6125% ( 80) 00:07:55.498 13409.674 - 13510.498: 5.5375% ( 74) 00:07:55.498 13510.498 - 13611.323: 6.4750% ( 75) 00:07:55.498 13611.323 - 13712.148: 7.2250% ( 60) 00:07:55.498 13712.148 - 13812.972: 8.0500% ( 66) 00:07:55.498 13812.972 - 13913.797: 9.1750% ( 90) 00:07:55.498 13913.797 - 14014.622: 10.7625% ( 127) 00:07:55.498 14014.622 - 14115.446: 12.1375% ( 110) 00:07:55.498 14115.446 - 14216.271: 13.6625% ( 122) 00:07:55.498 14216.271 - 
14317.095: 15.8625% ( 176) 00:07:55.498 14317.095 - 14417.920: 18.2375% ( 190) 00:07:55.498 14417.920 - 14518.745: 20.3500% ( 169) 00:07:55.498 14518.745 - 14619.569: 23.6000% ( 260) 00:07:55.498 14619.569 - 14720.394: 26.4125% ( 225) 00:07:55.498 14720.394 - 14821.218: 29.2875% ( 230) 00:07:55.499 14821.218 - 14922.043: 32.3625% ( 246) 00:07:55.499 14922.043 - 15022.868: 34.8375% ( 198) 00:07:55.499 15022.868 - 15123.692: 37.4875% ( 212) 00:07:55.499 15123.692 - 15224.517: 39.8750% ( 191) 00:07:55.499 15224.517 - 15325.342: 42.3125% ( 195) 00:07:55.499 15325.342 - 15426.166: 44.2625% ( 156) 00:07:55.499 15426.166 - 15526.991: 45.7500% ( 119) 00:07:55.499 15526.991 - 15627.815: 47.4500% ( 136) 00:07:55.499 15627.815 - 15728.640: 49.7000% ( 180) 00:07:55.499 15728.640 - 15829.465: 52.2125% ( 201) 00:07:55.499 15829.465 - 15930.289: 54.5875% ( 190) 00:07:55.499 15930.289 - 16031.114: 57.1375% ( 204) 00:07:55.499 16031.114 - 16131.938: 59.9250% ( 223) 00:07:55.499 16131.938 - 16232.763: 62.2125% ( 183) 00:07:55.499 16232.763 - 16333.588: 64.6000% ( 191) 00:07:55.499 16333.588 - 16434.412: 66.7875% ( 175) 00:07:55.499 16434.412 - 16535.237: 68.6125% ( 146) 00:07:55.499 16535.237 - 16636.062: 70.1000% ( 119) 00:07:55.499 16636.062 - 16736.886: 71.7250% ( 130) 00:07:55.499 16736.886 - 16837.711: 73.4375% ( 137) 00:07:55.499 16837.711 - 16938.535: 75.2875% ( 148) 00:07:55.499 16938.535 - 17039.360: 76.5750% ( 103) 00:07:55.499 17039.360 - 17140.185: 77.9625% ( 111) 00:07:55.499 17140.185 - 17241.009: 78.9375% ( 78) 00:07:55.499 17241.009 - 17341.834: 79.7250% ( 63) 00:07:55.499 17341.834 - 17442.658: 80.4875% ( 61) 00:07:55.499 17442.658 - 17543.483: 81.3250% ( 67) 00:07:55.499 17543.483 - 17644.308: 82.2875% ( 77) 00:07:55.499 17644.308 - 17745.132: 83.3500% ( 85) 00:07:55.499 17745.132 - 17845.957: 83.9750% ( 50) 00:07:55.499 17845.957 - 17946.782: 84.5250% ( 44) 00:07:55.499 17946.782 - 18047.606: 85.5250% ( 80) 00:07:55.499 18047.606 - 18148.431: 86.3125% ( 63) 00:07:55.499 18148.431 - 18249.255: 87.6750% ( 109) 00:07:55.499 18249.255 - 18350.080: 88.7250% ( 84) 00:07:55.499 18350.080 - 18450.905: 89.8125% ( 87) 00:07:55.499 18450.905 - 18551.729: 90.7375% ( 74) 00:07:55.499 18551.729 - 18652.554: 91.6500% ( 73) 00:07:55.499 18652.554 - 18753.378: 92.6250% ( 78) 00:07:55.499 18753.378 - 18854.203: 93.1625% ( 43) 00:07:55.499 18854.203 - 18955.028: 93.5875% ( 34) 00:07:55.499 18955.028 - 19055.852: 93.9750% ( 31) 00:07:55.499 19055.852 - 19156.677: 94.4500% ( 38) 00:07:55.499 19156.677 - 19257.502: 94.6625% ( 17) 00:07:55.499 19257.502 - 19358.326: 94.8375% ( 14) 00:07:55.499 19358.326 - 19459.151: 94.9750% ( 11) 00:07:55.499 19459.151 - 19559.975: 95.1125% ( 11) 00:07:55.499 19559.975 - 19660.800: 95.3125% ( 16) 00:07:55.499 19660.800 - 19761.625: 95.5750% ( 21) 00:07:55.499 19761.625 - 19862.449: 95.7625% ( 15) 00:07:55.499 19862.449 - 19963.274: 96.1000% ( 27) 00:07:55.499 19963.274 - 20064.098: 96.5000% ( 32) 00:07:55.499 20064.098 - 20164.923: 96.9250% ( 34) 00:07:55.499 20164.923 - 20265.748: 97.3000% ( 30) 00:07:55.499 20265.748 - 20366.572: 97.5500% ( 20) 00:07:55.499 20366.572 - 20467.397: 97.7500% ( 16) 00:07:55.499 20467.397 - 20568.222: 97.9250% ( 14) 00:07:55.499 20568.222 - 20669.046: 98.0250% ( 8) 00:07:55.499 20669.046 - 20769.871: 98.1500% ( 10) 00:07:55.499 20769.871 - 20870.695: 98.2750% ( 10) 00:07:55.499 20870.695 - 20971.520: 98.3875% ( 9) 00:07:55.499 20971.520 - 21072.345: 98.4000% ( 1) 00:07:55.499 26012.751 - 26214.400: 98.5875% ( 15) 00:07:55.499 26214.400 - 
26416.049: 98.7625% ( 14) 00:07:55.499 26416.049 - 26617.698: 98.8875% ( 10) 00:07:55.499 26617.698 - 26819.348: 99.0125% ( 10) 00:07:55.499 26819.348 - 27020.997: 99.1500% ( 11) 00:07:55.499 27020.997 - 27222.646: 99.2000% ( 4) 00:07:55.499 35288.615 - 35490.265: 99.2250% ( 2) 00:07:55.499 35490.265 - 35691.914: 99.2375% ( 1) 00:07:55.499 35691.914 - 35893.563: 99.2500% ( 1) 00:07:55.499 35893.563 - 36095.212: 99.5625% ( 25) 00:07:55.499 36095.212 - 36296.862: 99.6625% ( 8) 00:07:55.499 36296.862 - 36498.511: 99.7625% ( 8) 00:07:55.499 36498.511 - 36700.160: 99.8500% ( 7) 00:07:55.499 36700.160 - 36901.809: 99.9250% ( 6) 00:07:55.499 36901.809 - 37103.458: 100.0000% ( 6) 00:07:55.499 00:07:55.499 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:07:55.499 ============================================================================== 00:07:55.499 Range in us Cumulative IO count 00:07:55.499 10132.874 - 10183.286: 0.0125% ( 1) 00:07:55.499 10233.698 - 10284.111: 0.0250% ( 1) 00:07:55.499 10284.111 - 10334.523: 0.0375% ( 1) 00:07:55.499 10334.523 - 10384.935: 0.0625% ( 2) 00:07:55.499 10384.935 - 10435.348: 0.1750% ( 9) 00:07:55.499 10435.348 - 10485.760: 0.3375% ( 13) 00:07:55.499 10485.760 - 10536.172: 0.3625% ( 2) 00:07:55.499 10536.172 - 10586.585: 0.4000% ( 3) 00:07:55.499 10586.585 - 10636.997: 0.4625% ( 5) 00:07:55.499 10636.997 - 10687.409: 0.4750% ( 1) 00:07:55.499 10687.409 - 10737.822: 0.4875% ( 1) 00:07:55.499 10737.822 - 10788.234: 0.5000% ( 1) 00:07:55.499 10788.234 - 10838.646: 0.6375% ( 11) 00:07:55.499 10838.646 - 10889.058: 0.6875% ( 4) 00:07:55.499 10989.883 - 11040.295: 0.7000% ( 1) 00:07:55.499 11040.295 - 11090.708: 0.7375% ( 3) 00:07:55.499 11090.708 - 11141.120: 0.7750% ( 3) 00:07:55.499 11141.120 - 11191.532: 0.8000% ( 2) 00:07:55.499 12552.665 - 12603.077: 0.8250% ( 2) 00:07:55.499 12603.077 - 12653.489: 0.8375% ( 1) 00:07:55.499 12653.489 - 12703.902: 0.9250% ( 7) 00:07:55.499 12703.902 - 12754.314: 1.0250% ( 8) 00:07:55.499 12754.314 - 12804.726: 1.0500% ( 2) 00:07:55.499 12804.726 - 12855.138: 1.1750% ( 10) 00:07:55.499 12855.138 - 12905.551: 1.2875% ( 9) 00:07:55.499 12905.551 - 13006.375: 1.9375% ( 52) 00:07:55.499 13006.375 - 13107.200: 2.6000% ( 53) 00:07:55.499 13107.200 - 13208.025: 3.3625% ( 61) 00:07:55.499 13208.025 - 13308.849: 3.9750% ( 49) 00:07:55.499 13308.849 - 13409.674: 4.7500% ( 62) 00:07:55.499 13409.674 - 13510.498: 5.6250% ( 70) 00:07:55.499 13510.498 - 13611.323: 6.5875% ( 77) 00:07:55.499 13611.323 - 13712.148: 7.5375% ( 76) 00:07:55.499 13712.148 - 13812.972: 9.0625% ( 122) 00:07:55.499 13812.972 - 13913.797: 10.7125% ( 132) 00:07:55.499 13913.797 - 14014.622: 12.1750% ( 117) 00:07:55.499 14014.622 - 14115.446: 13.5000% ( 106) 00:07:55.499 14115.446 - 14216.271: 15.0125% ( 121) 00:07:55.499 14216.271 - 14317.095: 17.1500% ( 171) 00:07:55.499 14317.095 - 14417.920: 19.2875% ( 171) 00:07:55.499 14417.920 - 14518.745: 21.5750% ( 183) 00:07:55.499 14518.745 - 14619.569: 24.3250% ( 220) 00:07:55.499 14619.569 - 14720.394: 26.9125% ( 207) 00:07:55.499 14720.394 - 14821.218: 29.1125% ( 176) 00:07:55.499 14821.218 - 14922.043: 31.4750% ( 189) 00:07:55.499 14922.043 - 15022.868: 33.4125% ( 155) 00:07:55.499 15022.868 - 15123.692: 35.4375% ( 162) 00:07:55.499 15123.692 - 15224.517: 37.9000% ( 197) 00:07:55.499 15224.517 - 15325.342: 40.1750% ( 182) 00:07:55.499 15325.342 - 15426.166: 42.3250% ( 172) 00:07:55.499 15426.166 - 15526.991: 44.9000% ( 206) 00:07:55.499 15526.991 - 15627.815: 46.9125% ( 161) 00:07:55.499 15627.815 - 
15728.640: 48.6375% ( 138) 00:07:55.499 15728.640 - 15829.465: 51.2250% ( 207) 00:07:55.499 15829.465 - 15930.289: 53.9125% ( 215) 00:07:55.499 15930.289 - 16031.114: 56.7875% ( 230) 00:07:55.499 16031.114 - 16131.938: 59.3500% ( 205) 00:07:55.499 16131.938 - 16232.763: 61.8250% ( 198) 00:07:55.499 16232.763 - 16333.588: 64.4750% ( 212) 00:07:55.499 16333.588 - 16434.412: 66.8750% ( 192) 00:07:55.499 16434.412 - 16535.237: 68.8875% ( 161) 00:07:55.499 16535.237 - 16636.062: 70.7250% ( 147) 00:07:55.499 16636.062 - 16736.886: 72.2750% ( 124) 00:07:55.499 16736.886 - 16837.711: 73.7500% ( 118) 00:07:55.499 16837.711 - 16938.535: 75.2625% ( 121) 00:07:55.499 16938.535 - 17039.360: 76.9875% ( 138) 00:07:55.499 17039.360 - 17140.185: 78.5500% ( 125) 00:07:55.499 17140.185 - 17241.009: 79.4875% ( 75) 00:07:55.499 17241.009 - 17341.834: 80.2625% ( 62) 00:07:55.499 17341.834 - 17442.658: 81.0375% ( 62) 00:07:55.499 17442.658 - 17543.483: 82.0875% ( 84) 00:07:55.499 17543.483 - 17644.308: 82.9750% ( 71) 00:07:55.499 17644.308 - 17745.132: 83.7250% ( 60) 00:07:55.499 17745.132 - 17845.957: 84.2875% ( 45) 00:07:55.499 17845.957 - 17946.782: 85.0250% ( 59) 00:07:55.499 17946.782 - 18047.606: 85.6875% ( 53) 00:07:55.499 18047.606 - 18148.431: 86.4375% ( 60) 00:07:55.499 18148.431 - 18249.255: 87.3125% ( 70) 00:07:55.499 18249.255 - 18350.080: 88.1625% ( 68) 00:07:55.499 18350.080 - 18450.905: 89.0125% ( 68) 00:07:55.499 18450.905 - 18551.729: 89.7750% ( 61) 00:07:55.499 18551.729 - 18652.554: 90.4250% ( 52) 00:07:55.499 18652.554 - 18753.378: 91.1250% ( 56) 00:07:55.499 18753.378 - 18854.203: 91.7875% ( 53) 00:07:55.499 18854.203 - 18955.028: 92.4875% ( 56) 00:07:55.499 18955.028 - 19055.852: 93.0750% ( 47) 00:07:55.499 19055.852 - 19156.677: 93.7125% ( 51) 00:07:55.499 19156.677 - 19257.502: 94.1375% ( 34) 00:07:55.499 19257.502 - 19358.326: 94.6875% ( 44) 00:07:55.499 19358.326 - 19459.151: 95.1000% ( 33) 00:07:55.499 19459.151 - 19559.975: 95.6250% ( 42) 00:07:55.499 19559.975 - 19660.800: 96.0500% ( 34) 00:07:55.499 19660.800 - 19761.625: 96.5375% ( 39) 00:07:55.499 19761.625 - 19862.449: 96.8625% ( 26) 00:07:55.499 19862.449 - 19963.274: 97.1250% ( 21) 00:07:55.499 19963.274 - 20064.098: 97.3875% ( 21) 00:07:55.499 20064.098 - 20164.923: 97.5375% ( 12) 00:07:55.499 20164.923 - 20265.748: 97.7250% ( 15) 00:07:55.499 20265.748 - 20366.572: 97.9125% ( 15) 00:07:55.499 20366.572 - 20467.397: 98.0375% ( 10) 00:07:55.499 20467.397 - 20568.222: 98.1500% ( 9) 00:07:55.499 20568.222 - 20669.046: 98.2250% ( 6) 00:07:55.499 20669.046 - 20769.871: 98.2750% ( 4) 00:07:55.499 20769.871 - 20870.695: 98.3250% ( 4) 00:07:55.499 20870.695 - 20971.520: 98.3750% ( 4) 00:07:55.499 20971.520 - 21072.345: 98.4000% ( 2) 00:07:55.499 25105.329 - 25206.154: 98.4125% ( 1) 00:07:55.499 25206.154 - 25306.978: 98.4250% ( 1) 00:07:55.499 25306.978 - 25407.803: 98.4625% ( 3) 00:07:55.499 25407.803 - 25508.628: 98.4875% ( 2) 00:07:55.499 25508.628 - 25609.452: 98.5250% ( 3) 00:07:55.499 25609.452 - 25710.277: 98.5625% ( 3) 00:07:55.499 25710.277 - 25811.102: 98.5875% ( 2) 00:07:55.499 25811.102 - 26012.751: 98.7250% ( 11) 00:07:55.499 26012.751 - 26214.400: 98.8125% ( 7) 00:07:55.500 26214.400 - 26416.049: 98.9375% ( 10) 00:07:55.500 26416.049 - 26617.698: 99.0500% ( 9) 00:07:55.500 26617.698 - 26819.348: 99.1875% ( 11) 00:07:55.500 26819.348 - 27020.997: 99.2000% ( 1) 00:07:55.500 35288.615 - 35490.265: 99.2500% ( 4) 00:07:55.500 35490.265 - 35691.914: 99.3250% ( 6) 00:07:55.500 35691.914 - 35893.563: 99.4125% ( 7) 00:07:55.500 
35893.563 - 36095.212: 99.4750% ( 5) 00:07:55.500 36095.212 - 36296.862: 99.5875% ( 9) 00:07:55.500 36296.862 - 36498.511: 99.6500% ( 5) 00:07:55.500 36498.511 - 36700.160: 99.7500% ( 8) 00:07:55.500 36700.160 - 36901.809: 99.8375% ( 7) 00:07:55.500 36901.809 - 37103.458: 99.9250% ( 7) 00:07:55.500 37103.458 - 37305.108: 100.0000% ( 6) 00:07:55.500 00:07:55.500 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:07:55.500 ============================================================================== 00:07:55.500 Range in us Cumulative IO count 00:07:55.500 9779.988 - 9830.400: 0.0375% ( 3) 00:07:55.500 9830.400 - 9880.812: 0.0875% ( 4) 00:07:55.500 9880.812 - 9931.225: 0.1625% ( 6) 00:07:55.500 9931.225 - 9981.637: 0.2875% ( 10) 00:07:55.500 9981.637 - 10032.049: 0.5375% ( 20) 00:07:55.500 10032.049 - 10082.462: 0.5750% ( 3) 00:07:55.500 10082.462 - 10132.874: 0.6125% ( 3) 00:07:55.500 10132.874 - 10183.286: 0.6500% ( 3) 00:07:55.500 10183.286 - 10233.698: 0.6875% ( 3) 00:07:55.500 10233.698 - 10284.111: 0.7125% ( 2) 00:07:55.500 10284.111 - 10334.523: 0.7500% ( 3) 00:07:55.500 10334.523 - 10384.935: 0.7875% ( 3) 00:07:55.500 10384.935 - 10435.348: 0.8000% ( 1) 00:07:55.500 12703.902 - 12754.314: 0.8125% ( 1) 00:07:55.500 12855.138 - 12905.551: 0.8750% ( 5) 00:07:55.500 12905.551 - 13006.375: 1.1500% ( 22) 00:07:55.500 13006.375 - 13107.200: 1.7125% ( 45) 00:07:55.500 13107.200 - 13208.025: 2.6500% ( 75) 00:07:55.500 13208.025 - 13308.849: 3.6250% ( 78) 00:07:55.500 13308.849 - 13409.674: 4.5375% ( 73) 00:07:55.500 13409.674 - 13510.498: 5.0875% ( 44) 00:07:55.500 13510.498 - 13611.323: 5.8250% ( 59) 00:07:55.500 13611.323 - 13712.148: 7.0625% ( 99) 00:07:55.500 13712.148 - 13812.972: 8.2000% ( 91) 00:07:55.500 13812.972 - 13913.797: 9.8125% ( 129) 00:07:55.500 13913.797 - 14014.622: 11.2625% ( 116) 00:07:55.500 14014.622 - 14115.446: 12.8250% ( 125) 00:07:55.500 14115.446 - 14216.271: 14.7500% ( 154) 00:07:55.500 14216.271 - 14317.095: 16.4500% ( 136) 00:07:55.500 14317.095 - 14417.920: 18.0625% ( 129) 00:07:55.500 14417.920 - 14518.745: 20.8750% ( 225) 00:07:55.500 14518.745 - 14619.569: 23.6000% ( 218) 00:07:55.500 14619.569 - 14720.394: 26.1875% ( 207) 00:07:55.500 14720.394 - 14821.218: 29.5875% ( 272) 00:07:55.500 14821.218 - 14922.043: 32.3875% ( 224) 00:07:55.500 14922.043 - 15022.868: 34.7375% ( 188) 00:07:55.500 15022.868 - 15123.692: 37.2875% ( 204) 00:07:55.500 15123.692 - 15224.517: 39.4000% ( 169) 00:07:55.500 15224.517 - 15325.342: 41.3750% ( 158) 00:07:55.500 15325.342 - 15426.166: 43.3375% ( 157) 00:07:55.500 15426.166 - 15526.991: 45.3000% ( 157) 00:07:55.500 15526.991 - 15627.815: 47.6625% ( 189) 00:07:55.500 15627.815 - 15728.640: 49.7000% ( 163) 00:07:55.500 15728.640 - 15829.465: 51.8750% ( 174) 00:07:55.500 15829.465 - 15930.289: 53.9000% ( 162) 00:07:55.500 15930.289 - 16031.114: 56.1125% ( 177) 00:07:55.500 16031.114 - 16131.938: 58.5625% ( 196) 00:07:55.500 16131.938 - 16232.763: 61.3125% ( 220) 00:07:55.500 16232.763 - 16333.588: 63.6625% ( 188) 00:07:55.500 16333.588 - 16434.412: 65.6625% ( 160) 00:07:55.500 16434.412 - 16535.237: 68.1000% ( 195) 00:07:55.500 16535.237 - 16636.062: 70.9125% ( 225) 00:07:55.500 16636.062 - 16736.886: 73.0250% ( 169) 00:07:55.500 16736.886 - 16837.711: 74.7375% ( 137) 00:07:55.500 16837.711 - 16938.535: 76.3250% ( 127) 00:07:55.500 16938.535 - 17039.360: 77.7375% ( 113) 00:07:55.500 17039.360 - 17140.185: 79.0000% ( 101) 00:07:55.500 17140.185 - 17241.009: 80.0875% ( 87) 00:07:55.500 17241.009 - 17341.834: 81.3125% ( 
98) 00:07:55.500 17341.834 - 17442.658: 82.2750% ( 77) 00:07:55.500 17442.658 - 17543.483: 82.9500% ( 54) 00:07:55.500 17543.483 - 17644.308: 83.6625% ( 57) 00:07:55.500 17644.308 - 17745.132: 84.4500% ( 63) 00:07:55.500 17745.132 - 17845.957: 85.2250% ( 62) 00:07:55.500 17845.957 - 17946.782: 85.9000% ( 54) 00:07:55.500 17946.782 - 18047.606: 86.9625% ( 85) 00:07:55.500 18047.606 - 18148.431: 87.6875% ( 58) 00:07:55.500 18148.431 - 18249.255: 88.5250% ( 67) 00:07:55.500 18249.255 - 18350.080: 89.2125% ( 55) 00:07:55.500 18350.080 - 18450.905: 89.9125% ( 56) 00:07:55.500 18450.905 - 18551.729: 90.4375% ( 42) 00:07:55.500 18551.729 - 18652.554: 90.9500% ( 41) 00:07:55.500 18652.554 - 18753.378: 91.5250% ( 46) 00:07:55.500 18753.378 - 18854.203: 92.1000% ( 46) 00:07:55.500 18854.203 - 18955.028: 92.7250% ( 50) 00:07:55.500 18955.028 - 19055.852: 93.2000% ( 38) 00:07:55.500 19055.852 - 19156.677: 93.8250% ( 50) 00:07:55.500 19156.677 - 19257.502: 94.3750% ( 44) 00:07:55.500 19257.502 - 19358.326: 94.9000% ( 42) 00:07:55.500 19358.326 - 19459.151: 95.3250% ( 34) 00:07:55.500 19459.151 - 19559.975: 95.6250% ( 24) 00:07:55.500 19559.975 - 19660.800: 95.9000% ( 22) 00:07:55.500 19660.800 - 19761.625: 96.1375% ( 19) 00:07:55.500 19761.625 - 19862.449: 96.3375% ( 16) 00:07:55.500 19862.449 - 19963.274: 96.4625% ( 10) 00:07:55.500 19963.274 - 20064.098: 96.5750% ( 9) 00:07:55.500 20064.098 - 20164.923: 96.7000% ( 10) 00:07:55.500 20164.923 - 20265.748: 96.8375% ( 11) 00:07:55.500 20265.748 - 20366.572: 97.0625% ( 18) 00:07:55.500 20366.572 - 20467.397: 97.5125% ( 36) 00:07:55.500 20467.397 - 20568.222: 97.7125% ( 16) 00:07:55.500 20568.222 - 20669.046: 97.8250% ( 9) 00:07:55.500 20669.046 - 20769.871: 97.9375% ( 9) 00:07:55.500 20769.871 - 20870.695: 98.0250% ( 7) 00:07:55.500 20870.695 - 20971.520: 98.1375% ( 9) 00:07:55.500 20971.520 - 21072.345: 98.2125% ( 6) 00:07:55.500 21072.345 - 21173.169: 98.2625% ( 4) 00:07:55.500 21173.169 - 21273.994: 98.3250% ( 5) 00:07:55.500 21273.994 - 21374.818: 98.3625% ( 3) 00:07:55.500 21374.818 - 21475.643: 98.4000% ( 3) 00:07:55.500 25105.329 - 25206.154: 98.4375% ( 3) 00:07:55.500 25206.154 - 25306.978: 98.5000% ( 5) 00:07:55.500 25306.978 - 25407.803: 98.5750% ( 6) 00:07:55.500 25407.803 - 25508.628: 98.6375% ( 5) 00:07:55.500 25508.628 - 25609.452: 98.7125% ( 6) 00:07:55.500 25609.452 - 25710.277: 98.7750% ( 5) 00:07:55.500 25710.277 - 25811.102: 98.8375% ( 5) 00:07:55.500 25811.102 - 26012.751: 98.9750% ( 11) 00:07:55.500 26012.751 - 26214.400: 99.1125% ( 11) 00:07:55.500 26214.400 - 26416.049: 99.2000% ( 7) 00:07:55.500 34885.317 - 35086.966: 99.2250% ( 2) 00:07:55.500 35086.966 - 35288.615: 99.3375% ( 9) 00:07:55.500 35288.615 - 35490.265: 99.4250% ( 7) 00:07:55.500 35490.265 - 35691.914: 99.5125% ( 7) 00:07:55.500 35691.914 - 35893.563: 99.6250% ( 9) 00:07:55.500 35893.563 - 36095.212: 99.7250% ( 8) 00:07:55.500 36095.212 - 36296.862: 99.8375% ( 9) 00:07:55.500 36296.862 - 36498.511: 99.9375% ( 8) 00:07:55.500 36498.511 - 36700.160: 100.0000% ( 5) 00:07:55.500 00:07:55.500 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:07:55.500 ============================================================================== 00:07:55.500 Range in us Cumulative IO count 00:07:55.500 7662.671 - 7713.083: 0.0250% ( 2) 00:07:55.500 7864.320 - 7914.732: 0.0500% ( 2) 00:07:55.500 7914.732 - 7965.145: 0.1375% ( 7) 00:07:55.500 7965.145 - 8015.557: 0.2375% ( 8) 00:07:55.500 8015.557 - 8065.969: 0.3750% ( 11) 00:07:55.500 8065.969 - 8116.382: 0.5375% ( 13) 
00:07:55.500 8116.382 - 8166.794: 0.6000% ( 5) 00:07:55.500 8166.794 - 8217.206: 0.6500% ( 4) 00:07:55.500 8217.206 - 8267.618: 0.6875% ( 3) 00:07:55.500 8267.618 - 8318.031: 0.7375% ( 4) 00:07:55.500 8318.031 - 8368.443: 0.7750% ( 3) 00:07:55.500 8368.443 - 8418.855: 0.8000% ( 2) 00:07:55.500 12653.489 - 12703.902: 0.8375% ( 3) 00:07:55.500 12703.902 - 12754.314: 0.8750% ( 3) 00:07:55.500 12754.314 - 12804.726: 1.0125% ( 11) 00:07:55.500 12804.726 - 12855.138: 1.1500% ( 11) 00:07:55.500 12855.138 - 12905.551: 1.3375% ( 15) 00:07:55.500 12905.551 - 13006.375: 1.8625% ( 42) 00:07:55.500 13006.375 - 13107.200: 2.4500% ( 47) 00:07:55.500 13107.200 - 13208.025: 2.8625% ( 33) 00:07:55.500 13208.025 - 13308.849: 3.5000% ( 51) 00:07:55.500 13308.849 - 13409.674: 4.1625% ( 53) 00:07:55.500 13409.674 - 13510.498: 5.1875% ( 82) 00:07:55.500 13510.498 - 13611.323: 6.3625% ( 94) 00:07:55.500 13611.323 - 13712.148: 7.9125% ( 124) 00:07:55.500 13712.148 - 13812.972: 9.6250% ( 137) 00:07:55.500 13812.972 - 13913.797: 10.9625% ( 107) 00:07:55.500 13913.797 - 14014.622: 12.2625% ( 104) 00:07:55.500 14014.622 - 14115.446: 13.4625% ( 96) 00:07:55.500 14115.446 - 14216.271: 14.6750% ( 97) 00:07:55.500 14216.271 - 14317.095: 16.0000% ( 106) 00:07:55.500 14317.095 - 14417.920: 17.6125% ( 129) 00:07:55.500 14417.920 - 14518.745: 20.1375% ( 202) 00:07:55.500 14518.745 - 14619.569: 22.9625% ( 226) 00:07:55.500 14619.569 - 14720.394: 26.7375% ( 302) 00:07:55.500 14720.394 - 14821.218: 29.8125% ( 246) 00:07:55.500 14821.218 - 14922.043: 33.1625% ( 268) 00:07:55.500 14922.043 - 15022.868: 35.7125% ( 204) 00:07:55.500 15022.868 - 15123.692: 38.3250% ( 209) 00:07:55.500 15123.692 - 15224.517: 40.2125% ( 151) 00:07:55.500 15224.517 - 15325.342: 42.1250% ( 153) 00:07:55.500 15325.342 - 15426.166: 44.3750% ( 180) 00:07:55.500 15426.166 - 15526.991: 45.9875% ( 129) 00:07:55.500 15526.991 - 15627.815: 47.8875% ( 152) 00:07:55.500 15627.815 - 15728.640: 50.1500% ( 181) 00:07:55.500 15728.640 - 15829.465: 52.8875% ( 219) 00:07:55.500 15829.465 - 15930.289: 55.9000% ( 241) 00:07:55.500 15930.289 - 16031.114: 58.6625% ( 221) 00:07:55.500 16031.114 - 16131.938: 61.1875% ( 202) 00:07:55.500 16131.938 - 16232.763: 63.1250% ( 155) 00:07:55.500 16232.763 - 16333.588: 64.8500% ( 138) 00:07:55.500 16333.588 - 16434.412: 66.5625% ( 137) 00:07:55.500 16434.412 - 16535.237: 68.1250% ( 125) 00:07:55.500 16535.237 - 16636.062: 69.4625% ( 107) 00:07:55.501 16636.062 - 16736.886: 70.9250% ( 117) 00:07:55.501 16736.886 - 16837.711: 72.7500% ( 146) 00:07:55.501 16837.711 - 16938.535: 74.6750% ( 154) 00:07:55.501 16938.535 - 17039.360: 76.4875% ( 145) 00:07:55.501 17039.360 - 17140.185: 78.1125% ( 130) 00:07:55.501 17140.185 - 17241.009: 79.3500% ( 99) 00:07:55.501 17241.009 - 17341.834: 80.3000% ( 76) 00:07:55.501 17341.834 - 17442.658: 81.4500% ( 92) 00:07:55.501 17442.658 - 17543.483: 82.6125% ( 93) 00:07:55.501 17543.483 - 17644.308: 83.7625% ( 92) 00:07:55.501 17644.308 - 17745.132: 85.1125% ( 108) 00:07:55.501 17745.132 - 17845.957: 86.2125% ( 88) 00:07:55.501 17845.957 - 17946.782: 87.0625% ( 68) 00:07:55.501 17946.782 - 18047.606: 87.7875% ( 58) 00:07:55.501 18047.606 - 18148.431: 88.3125% ( 42) 00:07:55.501 18148.431 - 18249.255: 88.7500% ( 35) 00:07:55.501 18249.255 - 18350.080: 89.0250% ( 22) 00:07:55.501 18350.080 - 18450.905: 89.4000% ( 30) 00:07:55.501 18450.905 - 18551.729: 90.1250% ( 58) 00:07:55.501 18551.729 - 18652.554: 90.5375% ( 33) 00:07:55.501 18652.554 - 18753.378: 91.1500% ( 49) 00:07:55.501 18753.378 - 18854.203: 
91.8500% ( 56) 00:07:55.501 18854.203 - 18955.028: 92.3375% ( 39) 00:07:55.501 18955.028 - 19055.852: 92.8750% ( 43) 00:07:55.501 19055.852 - 19156.677: 93.3125% ( 35) 00:07:55.501 19156.677 - 19257.502: 93.7500% ( 35) 00:07:55.501 19257.502 - 19358.326: 94.1250% ( 30) 00:07:55.501 19358.326 - 19459.151: 94.5875% ( 37) 00:07:55.501 19459.151 - 19559.975: 94.7875% ( 16) 00:07:55.501 19559.975 - 19660.800: 94.9625% ( 14) 00:07:55.501 19660.800 - 19761.625: 95.2375% ( 22) 00:07:55.501 19761.625 - 19862.449: 95.6250% ( 31) 00:07:55.501 19862.449 - 19963.274: 95.9750% ( 28) 00:07:55.501 19963.274 - 20064.098: 96.3250% ( 28) 00:07:55.501 20064.098 - 20164.923: 96.9250% ( 48) 00:07:55.501 20164.923 - 20265.748: 97.3875% ( 37) 00:07:55.501 20265.748 - 20366.572: 97.8125% ( 34) 00:07:55.501 20366.572 - 20467.397: 98.0375% ( 18) 00:07:55.501 20467.397 - 20568.222: 98.2250% ( 15) 00:07:55.501 20568.222 - 20669.046: 98.3750% ( 12) 00:07:55.501 20669.046 - 20769.871: 98.4000% ( 2) 00:07:55.501 26012.751 - 26214.400: 98.5250% ( 10) 00:07:55.501 26214.400 - 26416.049: 98.6625% ( 11) 00:07:55.501 26416.049 - 26617.698: 98.8000% ( 11) 00:07:55.501 26617.698 - 26819.348: 98.9375% ( 11) 00:07:55.501 26819.348 - 27020.997: 99.0750% ( 11) 00:07:55.501 27020.997 - 27222.646: 99.2000% ( 10) 00:07:55.501 35691.914 - 35893.563: 99.2750% ( 6) 00:07:55.501 35893.563 - 36095.212: 99.3750% ( 8) 00:07:55.501 36095.212 - 36296.862: 99.4750% ( 8) 00:07:55.501 36296.862 - 36498.511: 99.5750% ( 8) 00:07:55.501 36498.511 - 36700.160: 99.6875% ( 9) 00:07:55.501 36700.160 - 36901.809: 99.8000% ( 9) 00:07:55.501 36901.809 - 37103.458: 99.9000% ( 8) 00:07:55.501 37103.458 - 37305.108: 100.0000% ( 8) 00:07:55.501 00:07:55.501 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:07:55.501 ============================================================================== 00:07:55.501 Range in us Cumulative IO count 00:07:55.501 6654.425 - 6704.837: 0.0375% ( 3) 00:07:55.501 6704.837 - 6755.249: 0.1000% ( 5) 00:07:55.501 6755.249 - 6805.662: 0.1750% ( 6) 00:07:55.501 6805.662 - 6856.074: 0.3000% ( 10) 00:07:55.501 6856.074 - 6906.486: 0.4875% ( 15) 00:07:55.501 6906.486 - 6956.898: 0.5750% ( 7) 00:07:55.501 6956.898 - 7007.311: 0.6250% ( 4) 00:07:55.501 7007.311 - 7057.723: 0.6625% ( 3) 00:07:55.501 7057.723 - 7108.135: 0.7000% ( 3) 00:07:55.501 7108.135 - 7158.548: 0.7500% ( 4) 00:07:55.501 7158.548 - 7208.960: 0.7875% ( 3) 00:07:55.501 7208.960 - 7259.372: 0.8000% ( 1) 00:07:55.501 12451.840 - 12502.252: 0.8125% ( 1) 00:07:55.501 12502.252 - 12552.665: 0.8250% ( 1) 00:07:55.501 12552.665 - 12603.077: 0.9125% ( 7) 00:07:55.501 12603.077 - 12653.489: 1.0125% ( 8) 00:07:55.501 12653.489 - 12703.902: 1.1750% ( 13) 00:07:55.501 12703.902 - 12754.314: 1.4125% ( 19) 00:07:55.501 12754.314 - 12804.726: 1.7125% ( 24) 00:07:55.501 12804.726 - 12855.138: 1.9000% ( 15) 00:07:55.501 12855.138 - 12905.551: 2.1625% ( 21) 00:07:55.501 12905.551 - 13006.375: 3.0250% ( 69) 00:07:55.501 13006.375 - 13107.200: 3.5250% ( 40) 00:07:55.501 13107.200 - 13208.025: 4.0125% ( 39) 00:07:55.501 13208.025 - 13308.849: 4.6375% ( 50) 00:07:55.501 13308.849 - 13409.674: 5.5750% ( 75) 00:07:55.501 13409.674 - 13510.498: 6.4875% ( 73) 00:07:55.501 13510.498 - 13611.323: 7.7750% ( 103) 00:07:55.501 13611.323 - 13712.148: 8.9250% ( 92) 00:07:55.501 13712.148 - 13812.972: 10.1000% ( 94) 00:07:55.501 13812.972 - 13913.797: 11.1875% ( 87) 00:07:55.501 13913.797 - 14014.622: 12.1750% ( 79) 00:07:55.501 14014.622 - 14115.446: 13.2875% ( 89) 00:07:55.501 14115.446 
- 14216.271: 14.6625% ( 110) 00:07:55.501 14216.271 - 14317.095: 16.1750% ( 121) 00:07:55.501 14317.095 - 14417.920: 18.4000% ( 178) 00:07:55.501 14417.920 - 14518.745: 20.6750% ( 182) 00:07:55.501 14518.745 - 14619.569: 23.8500% ( 254) 00:07:55.501 14619.569 - 14720.394: 26.3125% ( 197) 00:07:55.501 14720.394 - 14821.218: 28.6875% ( 190) 00:07:55.501 14821.218 - 14922.043: 31.4375% ( 220) 00:07:55.501 14922.043 - 15022.868: 34.1875% ( 220) 00:07:55.501 15022.868 - 15123.692: 36.8875% ( 216) 00:07:55.501 15123.692 - 15224.517: 39.6000% ( 217) 00:07:55.501 15224.517 - 15325.342: 42.4750% ( 230) 00:07:55.501 15325.342 - 15426.166: 45.0125% ( 203) 00:07:55.501 15426.166 - 15526.991: 47.0375% ( 162) 00:07:55.501 15526.991 - 15627.815: 49.5375% ( 200) 00:07:55.501 15627.815 - 15728.640: 51.8875% ( 188) 00:07:55.501 15728.640 - 15829.465: 54.3125% ( 194) 00:07:55.501 15829.465 - 15930.289: 56.4625% ( 172) 00:07:55.501 15930.289 - 16031.114: 58.4375% ( 158) 00:07:55.501 16031.114 - 16131.938: 60.8125% ( 190) 00:07:55.501 16131.938 - 16232.763: 63.1000% ( 183) 00:07:55.501 16232.763 - 16333.588: 64.9250% ( 146) 00:07:55.501 16333.588 - 16434.412: 66.9875% ( 165) 00:07:55.501 16434.412 - 16535.237: 68.6250% ( 131) 00:07:55.501 16535.237 - 16636.062: 70.1000% ( 118) 00:07:55.501 16636.062 - 16736.886: 71.4625% ( 109) 00:07:55.501 16736.886 - 16837.711: 72.5875% ( 90) 00:07:55.501 16837.711 - 16938.535: 74.2250% ( 131) 00:07:55.501 16938.535 - 17039.360: 75.4375% ( 97) 00:07:55.501 17039.360 - 17140.185: 76.6500% ( 97) 00:07:55.501 17140.185 - 17241.009: 78.0500% ( 112) 00:07:55.501 17241.009 - 17341.834: 79.4250% ( 110) 00:07:55.501 17341.834 - 17442.658: 81.1375% ( 137) 00:07:55.501 17442.658 - 17543.483: 82.2625% ( 90) 00:07:55.501 17543.483 - 17644.308: 83.5500% ( 103) 00:07:55.501 17644.308 - 17745.132: 84.6500% ( 88) 00:07:55.501 17745.132 - 17845.957: 85.5875% ( 75) 00:07:55.501 17845.957 - 17946.782: 86.4625% ( 70) 00:07:55.501 17946.782 - 18047.606: 87.2750% ( 65) 00:07:55.501 18047.606 - 18148.431: 87.8000% ( 42) 00:07:55.501 18148.431 - 18249.255: 88.3500% ( 44) 00:07:55.501 18249.255 - 18350.080: 88.9125% ( 45) 00:07:55.501 18350.080 - 18450.905: 89.5875% ( 54) 00:07:55.501 18450.905 - 18551.729: 90.0125% ( 34) 00:07:55.501 18551.729 - 18652.554: 90.3500% ( 27) 00:07:55.501 18652.554 - 18753.378: 90.7125% ( 29) 00:07:55.501 18753.378 - 18854.203: 91.1750% ( 37) 00:07:55.501 18854.203 - 18955.028: 91.5625% ( 31) 00:07:55.501 18955.028 - 19055.852: 91.9250% ( 29) 00:07:55.501 19055.852 - 19156.677: 92.2750% ( 28) 00:07:55.501 19156.677 - 19257.502: 92.6875% ( 33) 00:07:55.501 19257.502 - 19358.326: 93.1625% ( 38) 00:07:55.501 19358.326 - 19459.151: 94.0375% ( 70) 00:07:55.501 19459.151 - 19559.975: 94.9250% ( 71) 00:07:55.501 19559.975 - 19660.800: 95.3750% ( 36) 00:07:55.501 19660.800 - 19761.625: 95.9375% ( 45) 00:07:55.501 19761.625 - 19862.449: 96.3875% ( 36) 00:07:55.501 19862.449 - 19963.274: 97.0375% ( 52) 00:07:55.501 19963.274 - 20064.098: 97.4750% ( 35) 00:07:55.501 20064.098 - 20164.923: 97.9375% ( 37) 00:07:55.501 20164.923 - 20265.748: 98.1375% ( 16) 00:07:55.501 20265.748 - 20366.572: 98.2250% ( 7) 00:07:55.501 20366.572 - 20467.397: 98.3125% ( 7) 00:07:55.501 20467.397 - 20568.222: 98.4000% ( 7) 00:07:55.501 26214.400 - 26416.049: 98.4125% ( 1) 00:07:55.501 26416.049 - 26617.698: 98.5375% ( 10) 00:07:55.501 26617.698 - 26819.348: 98.6625% ( 10) 00:07:55.501 26819.348 - 27020.997: 98.7750% ( 9) 00:07:55.501 27020.997 - 27222.646: 98.9125% ( 11) 00:07:55.501 27222.646 - 
27424.295: 99.0500% ( 11) 00:07:55.501 27424.295 - 27625.945: 99.1875% ( 11) 00:07:55.501 27625.945 - 27827.594: 99.2000% ( 1) 00:07:55.501 35288.615 - 35490.265: 99.2500% ( 4) 00:07:55.501 35490.265 - 35691.914: 99.3375% ( 7) 00:07:55.501 35691.914 - 35893.563: 99.4375% ( 8) 00:07:55.501 35893.563 - 36095.212: 99.5250% ( 7) 00:07:55.501 36095.212 - 36296.862: 99.6375% ( 9) 00:07:55.501 36296.862 - 36498.511: 99.7250% ( 7) 00:07:55.501 36498.511 - 36700.160: 99.8375% ( 9) 00:07:55.501 36700.160 - 36901.809: 99.9375% ( 8) 00:07:55.501 36901.809 - 37103.458: 100.0000% ( 5) 00:07:55.501 00:07:55.501 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:07:55.501 ============================================================================== 00:07:55.501 Range in us Cumulative IO count 00:07:55.501 5973.858 - 5999.065: 0.0124% ( 1) 00:07:55.501 5999.065 - 6024.271: 0.0248% ( 1) 00:07:55.501 6024.271 - 6049.477: 0.0620% ( 3) 00:07:55.501 6049.477 - 6074.683: 0.0744% ( 1) 00:07:55.501 6074.683 - 6099.889: 0.0868% ( 1) 00:07:55.501 6099.889 - 6125.095: 0.1240% ( 3) 00:07:55.501 6125.095 - 6150.302: 0.2232% ( 8) 00:07:55.501 6150.302 - 6175.508: 0.3968% ( 14) 00:07:55.501 6175.508 - 6200.714: 0.4464% ( 4) 00:07:55.501 6200.714 - 6225.920: 0.4836% ( 3) 00:07:55.501 6225.920 - 6251.126: 0.4960% ( 1) 00:07:55.501 6251.126 - 6276.332: 0.5208% ( 2) 00:07:55.501 6276.332 - 6301.538: 0.5456% ( 2) 00:07:55.501 6301.538 - 6326.745: 0.5580% ( 1) 00:07:55.501 6326.745 - 6351.951: 0.5828% ( 2) 00:07:55.501 6351.951 - 6377.157: 0.6076% ( 2) 00:07:55.501 6377.157 - 6402.363: 0.6200% ( 1) 00:07:55.501 6402.363 - 6427.569: 0.6324% ( 1) 00:07:55.501 6427.569 - 6452.775: 0.6448% ( 1) 00:07:55.502 6452.775 - 6503.188: 0.6696% ( 2) 00:07:55.502 6503.188 - 6553.600: 0.7068% ( 3) 00:07:55.502 6553.600 - 6604.012: 0.7440% ( 3) 00:07:55.502 6604.012 - 6654.425: 0.7812% ( 3) 00:07:55.502 6654.425 - 6704.837: 0.7937% ( 1) 00:07:55.502 11947.717 - 11998.129: 0.8805% ( 7) 00:07:55.502 11998.129 - 12048.542: 0.9549% ( 6) 00:07:55.502 12048.542 - 12098.954: 1.0665% ( 9) 00:07:55.502 12098.954 - 12149.366: 1.2897% ( 18) 00:07:55.502 12149.366 - 12199.778: 1.3641% ( 6) 00:07:55.502 12199.778 - 12250.191: 1.4013% ( 3) 00:07:55.502 12250.191 - 12300.603: 1.4509% ( 4) 00:07:55.502 12300.603 - 12351.015: 1.4881% ( 3) 00:07:55.502 12351.015 - 12401.428: 1.5501% ( 5) 00:07:55.502 12401.428 - 12451.840: 1.5749% ( 2) 00:07:55.502 12451.840 - 12502.252: 1.5873% ( 1) 00:07:55.502 12653.489 - 12703.902: 1.6245% ( 3) 00:07:55.502 12703.902 - 12754.314: 1.6617% ( 3) 00:07:55.502 12754.314 - 12804.726: 1.6741% ( 1) 00:07:55.502 12804.726 - 12855.138: 1.7485% ( 6) 00:07:55.502 12855.138 - 12905.551: 1.7857% ( 3) 00:07:55.502 12905.551 - 13006.375: 2.2073% ( 34) 00:07:55.502 13006.375 - 13107.200: 2.5546% ( 28) 00:07:55.502 13107.200 - 13208.025: 2.9514% ( 32) 00:07:55.502 13208.025 - 13308.849: 3.5342% ( 47) 00:07:55.502 13308.849 - 13409.674: 4.2163% ( 55) 00:07:55.502 13409.674 - 13510.498: 4.7247% ( 41) 00:07:55.502 13510.498 - 13611.323: 5.7788% ( 85) 00:07:55.502 13611.323 - 13712.148: 6.7336% ( 77) 00:07:55.502 13712.148 - 13812.972: 8.2589% ( 123) 00:07:55.502 13812.972 - 13913.797: 9.8710% ( 130) 00:07:55.502 13913.797 - 14014.622: 11.3839% ( 122) 00:07:55.502 14014.622 - 14115.446: 13.2564% ( 151) 00:07:55.502 14115.446 - 14216.271: 15.8482% ( 209) 00:07:55.502 14216.271 - 14317.095: 18.3408% ( 201) 00:07:55.502 14317.095 - 14417.920: 20.3249% ( 160) 00:07:55.502 14417.920 - 14518.745: 22.9663% ( 213) 00:07:55.502 14518.745 
- 14619.569: 25.4836% ( 203) 00:07:55.502 14619.569 - 14720.394: 28.0134% ( 204) 00:07:55.502 14720.394 - 14821.218: 30.3199% ( 186) 00:07:55.502 14821.218 - 14922.043: 32.9241% ( 210) 00:07:55.502 14922.043 - 15022.868: 35.3423% ( 195) 00:07:55.502 15022.868 - 15123.692: 37.6984% ( 190) 00:07:55.502 15123.692 - 15224.517: 40.1166% ( 195) 00:07:55.502 15224.517 - 15325.342: 42.1255% ( 162) 00:07:55.502 15325.342 - 15426.166: 44.1096% ( 160) 00:07:55.502 15426.166 - 15526.991: 46.5526% ( 197) 00:07:55.502 15526.991 - 15627.815: 48.6235% ( 167) 00:07:55.502 15627.815 - 15728.640: 50.7812% ( 174) 00:07:55.502 15728.640 - 15829.465: 53.2366% ( 198) 00:07:55.502 15829.465 - 15930.289: 56.1260% ( 233) 00:07:55.502 15930.289 - 16031.114: 58.9286% ( 226) 00:07:55.502 16031.114 - 16131.938: 61.3343% ( 194) 00:07:55.502 16131.938 - 16232.763: 64.1865% ( 230) 00:07:55.502 16232.763 - 16333.588: 66.6295% ( 197) 00:07:55.502 16333.588 - 16434.412: 68.5020% ( 151) 00:07:55.502 16434.412 - 16535.237: 69.8041% ( 105) 00:07:55.502 16535.237 - 16636.062: 71.0441% ( 100) 00:07:55.502 16636.062 - 16736.886: 72.3090% ( 102) 00:07:55.502 16736.886 - 16837.711: 73.4623% ( 93) 00:07:55.502 16837.711 - 16938.535: 74.8636% ( 113) 00:07:55.502 16938.535 - 17039.360: 75.8557% ( 80) 00:07:55.502 17039.360 - 17140.185: 77.0957% ( 100) 00:07:55.502 17140.185 - 17241.009: 78.0878% ( 80) 00:07:55.502 17241.009 - 17341.834: 79.0675% ( 79) 00:07:55.502 17341.834 - 17442.658: 80.0347% ( 78) 00:07:55.502 17442.658 - 17543.483: 81.3864% ( 109) 00:07:55.502 17543.483 - 17644.308: 82.5645% ( 95) 00:07:55.502 17644.308 - 17745.132: 83.5689% ( 81) 00:07:55.502 17745.132 - 17845.957: 84.4370% ( 70) 00:07:55.502 17845.957 - 17946.782: 85.5655% ( 91) 00:07:55.502 17946.782 - 18047.606: 86.5079% ( 76) 00:07:55.502 18047.606 - 18148.431: 87.2396% ( 59) 00:07:55.502 18148.431 - 18249.255: 87.9340% ( 56) 00:07:55.502 18249.255 - 18350.080: 88.6037% ( 54) 00:07:55.502 18350.080 - 18450.905: 89.1369% ( 43) 00:07:55.502 18450.905 - 18551.729: 89.8189% ( 55) 00:07:55.502 18551.729 - 18652.554: 90.3026% ( 39) 00:07:55.502 18652.554 - 18753.378: 90.8730% ( 46) 00:07:55.502 18753.378 - 18854.203: 91.7411% ( 70) 00:07:55.502 18854.203 - 18955.028: 92.2495% ( 41) 00:07:55.502 18955.028 - 19055.852: 92.8695% ( 50) 00:07:55.502 19055.852 - 19156.677: 93.5640% ( 56) 00:07:55.502 19156.677 - 19257.502: 94.3080% ( 60) 00:07:55.502 19257.502 - 19358.326: 94.9157% ( 49) 00:07:55.502 19358.326 - 19459.151: 95.5109% ( 48) 00:07:55.502 19459.151 - 19559.975: 96.1558% ( 52) 00:07:55.502 19559.975 - 19660.800: 96.6270% ( 38) 00:07:55.502 19660.800 - 19761.625: 97.1478% ( 42) 00:07:55.502 19761.625 - 19862.449: 97.5570% ( 33) 00:07:55.502 19862.449 - 19963.274: 97.9291% ( 30) 00:07:55.502 19963.274 - 20064.098: 98.2391% ( 25) 00:07:55.502 20064.098 - 20164.923: 98.5367% ( 24) 00:07:55.502 20164.923 - 20265.748: 98.6731% ( 11) 00:07:55.502 20265.748 - 20366.572: 98.7847% ( 9) 00:07:55.502 20366.572 - 20467.397: 98.9087% ( 10) 00:07:55.502 20467.397 - 20568.222: 98.9955% ( 7) 00:07:55.502 20568.222 - 20669.046: 99.0823% ( 7) 00:07:55.502 20669.046 - 20769.871: 99.1319% ( 4) 00:07:55.502 20769.871 - 20870.695: 99.1815% ( 4) 00:07:55.502 20870.695 - 20971.520: 99.2063% ( 2) 00:07:55.502 27020.997 - 27222.646: 99.2684% ( 5) 00:07:55.502 27222.646 - 27424.295: 99.4048% ( 11) 00:07:55.502 27424.295 - 27625.945: 99.5412% ( 11) 00:07:55.502 27625.945 - 27827.594: 99.6776% ( 11) 00:07:55.502 27827.594 - 28029.243: 99.8140% ( 11) 00:07:55.502 28029.243 - 28230.892: 
99.9504% ( 11) 00:07:55.502 28230.892 - 28432.542: 100.0000% ( 4) 00:07:55.502 00:07:55.502 10:39:55 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:07:55.502 00:07:55.502 real 0m2.435s 00:07:55.502 user 0m2.137s 00:07:55.502 sys 0m0.183s 00:07:55.502 10:39:55 nvme.nvme_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:55.502 10:39:55 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x 00:07:55.502 ************************************ 00:07:55.502 END TEST nvme_perf 00:07:55.502 ************************************ 00:07:55.781 10:39:55 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:07:55.781 10:39:55 nvme -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:07:55.781 10:39:55 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:55.781 10:39:55 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:55.781 ************************************ 00:07:55.781 START TEST nvme_hello_world 00:07:55.781 ************************************ 00:07:55.781 10:39:55 nvme.nvme_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:07:55.781 Initializing NVMe Controllers 00:07:55.781 Attached to 0000:00:13.0 00:07:55.781 Namespace ID: 1 size: 1GB 00:07:55.781 Attached to 0000:00:10.0 00:07:55.781 Namespace ID: 1 size: 6GB 00:07:55.781 Attached to 0000:00:11.0 00:07:55.781 Namespace ID: 1 size: 5GB 00:07:55.781 Attached to 0000:00:12.0 00:07:55.781 Namespace ID: 1 size: 4GB 00:07:55.781 Namespace ID: 2 size: 4GB 00:07:55.781 Namespace ID: 3 size: 4GB 00:07:55.781 Initialization complete. 00:07:55.781 INFO: using host memory buffer for IO 00:07:55.781 Hello world! 00:07:55.781 INFO: using host memory buffer for IO 00:07:55.781 Hello world! 00:07:55.781 INFO: using host memory buffer for IO 00:07:55.781 Hello world! 00:07:55.781 INFO: using host memory buffer for IO 00:07:55.781 Hello world! 00:07:55.781 INFO: using host memory buffer for IO 00:07:55.781 Hello world! 00:07:55.781 INFO: using host memory buffer for IO 00:07:55.781 Hello world! 
00:07:55.781 ************************************ 00:07:55.781 END TEST nvme_hello_world 00:07:55.781 ************************************ 00:07:55.781 00:07:55.781 real 0m0.195s 00:07:55.781 user 0m0.060s 00:07:55.781 sys 0m0.090s 00:07:55.781 10:39:55 nvme.nvme_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:55.781 10:39:55 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x 00:07:56.055 10:39:55 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:07:56.055 10:39:55 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:56.055 10:39:55 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:56.055 10:39:55 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:56.055 ************************************ 00:07:56.055 START TEST nvme_sgl 00:07:56.055 ************************************ 00:07:56.055 10:39:55 nvme.nvme_sgl -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:07:56.055 0000:00:13.0: build_io_request_0 Invalid IO length parameter 00:07:56.055 0000:00:13.0: build_io_request_1 Invalid IO length parameter 00:07:56.055 0000:00:13.0: build_io_request_2 Invalid IO length parameter 00:07:56.055 0000:00:13.0: build_io_request_3 Invalid IO length parameter 00:07:56.055 0000:00:13.0: build_io_request_4 Invalid IO length parameter 00:07:56.055 0000:00:13.0: build_io_request_5 Invalid IO length parameter 00:07:56.055 0000:00:13.0: build_io_request_6 Invalid IO length parameter 00:07:56.055 0000:00:13.0: build_io_request_7 Invalid IO length parameter 00:07:56.055 0000:00:13.0: build_io_request_8 Invalid IO length parameter 00:07:56.055 0000:00:13.0: build_io_request_9 Invalid IO length parameter 00:07:56.055 0000:00:13.0: build_io_request_10 Invalid IO length parameter 00:07:56.055 0000:00:13.0: build_io_request_11 Invalid IO length parameter 00:07:56.055 0000:00:10.0: build_io_request_0 Invalid IO length parameter 00:07:56.055 0000:00:10.0: build_io_request_1 Invalid IO length parameter 00:07:56.055 0000:00:10.0: build_io_request_3 Invalid IO length parameter 00:07:56.055 0000:00:10.0: build_io_request_8 Invalid IO length parameter 00:07:56.055 0000:00:10.0: build_io_request_9 Invalid IO length parameter 00:07:56.055 0000:00:10.0: build_io_request_11 Invalid IO length parameter 00:07:56.055 0000:00:11.0: build_io_request_0 Invalid IO length parameter 00:07:56.055 0000:00:11.0: build_io_request_1 Invalid IO length parameter 00:07:56.055 0000:00:11.0: build_io_request_3 Invalid IO length parameter 00:07:56.055 0000:00:11.0: build_io_request_8 Invalid IO length parameter 00:07:56.055 0000:00:11.0: build_io_request_9 Invalid IO length parameter 00:07:56.055 0000:00:11.0: build_io_request_11 Invalid IO length parameter 00:07:56.055 0000:00:12.0: build_io_request_0 Invalid IO length parameter 00:07:56.055 0000:00:12.0: build_io_request_1 Invalid IO length parameter 00:07:56.055 0000:00:12.0: build_io_request_2 Invalid IO length parameter 00:07:56.055 0000:00:12.0: build_io_request_3 Invalid IO length parameter 00:07:56.055 0000:00:12.0: build_io_request_4 Invalid IO length parameter 00:07:56.055 0000:00:12.0: build_io_request_5 Invalid IO length parameter 00:07:56.055 0000:00:12.0: build_io_request_6 Invalid IO length parameter 00:07:56.055 0000:00:12.0: build_io_request_7 Invalid IO length parameter 00:07:56.055 0000:00:12.0: build_io_request_8 Invalid IO length parameter 00:07:56.055 0000:00:12.0: build_io_request_9 Invalid IO length parameter 
00:07:56.055 0000:00:12.0: build_io_request_10 Invalid IO length parameter 00:07:56.055 0000:00:12.0: build_io_request_11 Invalid IO length parameter 00:07:56.055 NVMe Readv/Writev Request test 00:07:56.055 Attached to 0000:00:13.0 00:07:56.055 Attached to 0000:00:10.0 00:07:56.055 Attached to 0000:00:11.0 00:07:56.055 Attached to 0000:00:12.0 00:07:56.055 0000:00:10.0: build_io_request_2 test passed 00:07:56.055 0000:00:10.0: build_io_request_4 test passed 00:07:56.055 0000:00:10.0: build_io_request_5 test passed 00:07:56.055 0000:00:10.0: build_io_request_6 test passed 00:07:56.056 0000:00:10.0: build_io_request_7 test passed 00:07:56.056 0000:00:10.0: build_io_request_10 test passed 00:07:56.056 0000:00:11.0: build_io_request_2 test passed 00:07:56.056 0000:00:11.0: build_io_request_4 test passed 00:07:56.056 0000:00:11.0: build_io_request_5 test passed 00:07:56.056 0000:00:11.0: build_io_request_6 test passed 00:07:56.056 0000:00:11.0: build_io_request_7 test passed 00:07:56.056 0000:00:11.0: build_io_request_10 test passed 00:07:56.056 Cleaning up... 00:07:56.056 00:07:56.056 real 0m0.249s 00:07:56.056 user 0m0.107s 00:07:56.056 sys 0m0.093s 00:07:56.056 10:39:56 nvme.nvme_sgl -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:56.056 10:39:56 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x 00:07:56.056 ************************************ 00:07:56.056 END TEST nvme_sgl 00:07:56.056 ************************************ 00:07:56.317 10:39:56 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:07:56.317 10:39:56 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:56.317 10:39:56 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:56.317 10:39:56 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:56.317 ************************************ 00:07:56.317 START TEST nvme_e2edp 00:07:56.317 ************************************ 00:07:56.317 10:39:56 nvme.nvme_e2edp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:07:56.317 NVMe Write/Read with End-to-End data protection test 00:07:56.317 Attached to 0000:00:13.0 00:07:56.317 Attached to 0000:00:10.0 00:07:56.317 Attached to 0000:00:11.0 00:07:56.317 Attached to 0000:00:12.0 00:07:56.317 Cleaning up... 
00:07:56.317 00:07:56.317 real 0m0.195s 00:07:56.317 user 0m0.057s 00:07:56.317 sys 0m0.089s 00:07:56.317 10:39:56 nvme.nvme_e2edp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:56.317 ************************************ 00:07:56.317 END TEST nvme_e2edp 00:07:56.317 ************************************ 00:07:56.317 10:39:56 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x 00:07:56.579 10:39:56 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:07:56.579 10:39:56 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:56.579 10:39:56 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:56.579 10:39:56 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:56.579 ************************************ 00:07:56.579 START TEST nvme_reserve 00:07:56.579 ************************************ 00:07:56.579 10:39:56 nvme.nvme_reserve -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:07:56.579 ===================================================== 00:07:56.579 NVMe Controller at PCI bus 0, device 19, function 0 00:07:56.579 ===================================================== 00:07:56.579 Reservations: Not Supported 00:07:56.579 ===================================================== 00:07:56.579 NVMe Controller at PCI bus 0, device 16, function 0 00:07:56.579 ===================================================== 00:07:56.579 Reservations: Not Supported 00:07:56.579 ===================================================== 00:07:56.579 NVMe Controller at PCI bus 0, device 17, function 0 00:07:56.579 ===================================================== 00:07:56.579 Reservations: Not Supported 00:07:56.579 ===================================================== 00:07:56.579 NVMe Controller at PCI bus 0, device 18, function 0 00:07:56.579 ===================================================== 00:07:56.579 Reservations: Not Supported 00:07:56.579 Reservation test passed 00:07:56.579 00:07:56.579 real 0m0.183s 00:07:56.579 user 0m0.061s 00:07:56.579 sys 0m0.076s 00:07:56.579 10:39:56 nvme.nvme_reserve -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:56.579 10:39:56 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x 00:07:56.579 ************************************ 00:07:56.579 END TEST nvme_reserve 00:07:56.579 ************************************ 00:07:56.841 10:39:56 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:07:56.841 10:39:56 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:56.841 10:39:56 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:56.841 10:39:56 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:56.841 ************************************ 00:07:56.841 START TEST nvme_err_injection 00:07:56.841 ************************************ 00:07:56.841 10:39:56 nvme.nvme_err_injection -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:07:56.841 NVMe Error Injection test 00:07:56.841 Attached to 0000:00:13.0 00:07:56.841 Attached to 0000:00:10.0 00:07:56.841 Attached to 0000:00:11.0 00:07:56.841 Attached to 0000:00:12.0 00:07:56.841 0000:00:13.0: get features failed as expected 00:07:56.841 0000:00:10.0: get features failed as expected 00:07:56.841 0000:00:11.0: get features failed as expected 00:07:56.841 0000:00:12.0: get features failed as expected 00:07:56.841 
0000:00:13.0: get features successfully as expected 00:07:56.841 0000:00:10.0: get features successfully as expected 00:07:56.841 0000:00:11.0: get features successfully as expected 00:07:56.841 0000:00:12.0: get features successfully as expected 00:07:56.841 0000:00:13.0: read failed as expected 00:07:56.841 0000:00:10.0: read failed as expected 00:07:56.841 0000:00:11.0: read failed as expected 00:07:56.841 0000:00:12.0: read failed as expected 00:07:56.841 0000:00:13.0: read successfully as expected 00:07:56.841 0000:00:10.0: read successfully as expected 00:07:56.841 0000:00:11.0: read successfully as expected 00:07:56.841 0000:00:12.0: read successfully as expected 00:07:56.841 Cleaning up... 00:07:56.841 00:07:56.841 real 0m0.196s 00:07:56.841 user 0m0.062s 00:07:56.841 sys 0m0.090s 00:07:56.841 ************************************ 00:07:56.841 END TEST nvme_err_injection 00:07:56.841 ************************************ 00:07:56.841 10:39:56 nvme.nvme_err_injection -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:56.841 10:39:56 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x 00:07:57.103 10:39:56 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:07:57.103 10:39:56 nvme -- common/autotest_common.sh@1101 -- # '[' 9 -le 1 ']' 00:07:57.103 10:39:56 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:57.103 10:39:56 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:57.103 ************************************ 00:07:57.103 START TEST nvme_overhead 00:07:57.103 ************************************ 00:07:57.103 10:39:56 nvme.nvme_overhead -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:07:58.049 Initializing NVMe Controllers 00:07:58.049 Attached to 0000:00:13.0 00:07:58.049 Attached to 0000:00:10.0 00:07:58.049 Attached to 0000:00:11.0 00:07:58.049 Attached to 0000:00:12.0 00:07:58.049 Initialization complete. Launching workers. 
00:07:58.049 submit (in ns) avg, min, max = 15191.8, 11985.4, 141969.2 00:07:58.049 complete (in ns) avg, min, max = 10368.6, 8270.0, 1044217.7 00:07:58.049 00:07:58.049 Submit histogram 00:07:58.049 ================ 00:07:58.049 Range in us Cumulative Count 00:07:58.049 11.963 - 12.012: 0.0331% ( 1) 00:07:58.049 12.455 - 12.505: 0.0662% ( 1) 00:07:58.049 12.554 - 12.603: 0.0993% ( 1) 00:07:58.049 12.603 - 12.702: 0.2648% ( 5) 00:07:58.049 12.702 - 12.800: 0.8606% ( 18) 00:07:58.049 12.800 - 12.898: 1.8868% ( 31) 00:07:58.049 12.898 - 12.997: 3.1447% ( 38) 00:07:58.049 12.997 - 13.095: 5.1970% ( 62) 00:07:58.049 13.095 - 13.194: 8.1430% ( 89) 00:07:58.049 13.194 - 13.292: 10.2946% ( 65) 00:07:58.049 13.292 - 13.391: 12.3800% ( 63) 00:07:58.049 13.391 - 13.489: 14.0682% ( 51) 00:07:58.049 13.489 - 13.588: 15.4916% ( 43) 00:07:58.049 13.588 - 13.686: 17.4777% ( 60) 00:07:58.049 13.686 - 13.785: 20.0265% ( 77) 00:07:58.049 13.785 - 13.883: 24.5945% ( 138) 00:07:58.049 13.883 - 13.982: 31.5458% ( 210) 00:07:58.049 13.982 - 14.080: 39.8544% ( 251) 00:07:58.049 14.080 - 14.178: 48.9242% ( 274) 00:07:58.049 14.178 - 14.277: 57.0672% ( 246) 00:07:58.049 14.277 - 14.375: 63.5220% ( 195) 00:07:58.049 14.375 - 14.474: 69.2155% ( 172) 00:07:58.049 14.474 - 14.572: 72.7905% ( 108) 00:07:58.049 14.572 - 14.671: 74.9090% ( 64) 00:07:58.049 14.671 - 14.769: 76.4978% ( 48) 00:07:58.049 14.769 - 14.868: 77.8550% ( 41) 00:07:58.049 14.868 - 14.966: 79.2122% ( 41) 00:07:58.049 14.966 - 15.065: 79.9404% ( 22) 00:07:58.049 15.065 - 15.163: 80.9004% ( 29) 00:07:58.049 15.163 - 15.262: 81.6617% ( 23) 00:07:58.049 15.262 - 15.360: 82.3568% ( 21) 00:07:58.049 15.360 - 15.458: 82.8203% ( 14) 00:07:58.049 15.458 - 15.557: 83.0851% ( 8) 00:07:58.049 15.557 - 15.655: 83.5154% ( 13) 00:07:58.049 15.655 - 15.754: 83.9457% ( 13) 00:07:58.049 15.754 - 15.852: 84.2767% ( 10) 00:07:58.049 15.852 - 15.951: 84.4753% ( 6) 00:07:58.049 15.951 - 16.049: 84.7402% ( 8) 00:07:58.049 16.049 - 16.148: 84.9057% ( 5) 00:07:58.049 16.148 - 16.246: 85.1043% ( 6) 00:07:58.049 16.246 - 16.345: 85.1705% ( 2) 00:07:58.049 16.345 - 16.443: 85.4684% ( 9) 00:07:58.049 16.640 - 16.738: 85.5677% ( 3) 00:07:58.049 16.738 - 16.837: 85.6008% ( 1) 00:07:58.049 16.837 - 16.935: 85.8656% ( 8) 00:07:58.049 16.935 - 17.034: 86.1304% ( 8) 00:07:58.049 17.034 - 17.132: 86.1966% ( 2) 00:07:58.049 17.132 - 17.231: 86.3290% ( 4) 00:07:58.049 17.231 - 17.329: 86.4945% ( 5) 00:07:58.049 17.329 - 17.428: 86.6600% ( 5) 00:07:58.049 17.428 - 17.526: 86.8587% ( 6) 00:07:58.049 17.526 - 17.625: 86.9911% ( 4) 00:07:58.049 17.625 - 17.723: 87.1897% ( 6) 00:07:58.049 17.723 - 17.822: 87.2890% ( 3) 00:07:58.049 17.822 - 17.920: 87.4876% ( 6) 00:07:58.049 17.920 - 18.018: 87.6862% ( 6) 00:07:58.049 18.018 - 18.117: 87.9510% ( 8) 00:07:58.049 18.117 - 18.215: 88.3151% ( 11) 00:07:58.049 18.215 - 18.314: 88.6792% ( 11) 00:07:58.049 18.314 - 18.412: 88.9772% ( 9) 00:07:58.049 18.412 - 18.511: 89.1096% ( 4) 00:07:58.049 18.511 - 18.609: 89.5730% ( 14) 00:07:58.049 18.609 - 18.708: 90.0695% ( 15) 00:07:58.049 18.708 - 18.806: 90.3343% ( 8) 00:07:58.049 18.806 - 18.905: 90.7977% ( 14) 00:07:58.049 18.905 - 19.003: 91.2281% ( 13) 00:07:58.049 19.003 - 19.102: 91.3605% ( 4) 00:07:58.049 19.102 - 19.200: 91.4929% ( 4) 00:07:58.049 19.200 - 19.298: 91.6915% ( 6) 00:07:58.049 19.298 - 19.397: 91.8239% ( 4) 00:07:58.049 19.397 - 19.495: 91.9894% ( 5) 00:07:58.049 19.495 - 19.594: 92.0887% ( 3) 00:07:58.049 19.594 - 19.692: 92.2211% ( 4) 00:07:58.049 19.692 - 19.791: 92.5190% ( 9) 
00:07:58.049 19.791 - 19.889: 92.7176% ( 6) 00:07:58.049 19.889 - 19.988: 92.9494% ( 7) 00:07:58.049 19.988 - 20.086: 93.1811% ( 7) 00:07:58.049 20.086 - 20.185: 93.3797% ( 6) 00:07:58.049 20.185 - 20.283: 93.6114% ( 7) 00:07:58.049 20.283 - 20.382: 93.9755% ( 11) 00:07:58.049 20.382 - 20.480: 94.2072% ( 7) 00:07:58.049 20.480 - 20.578: 94.5051% ( 9) 00:07:58.049 20.578 - 20.677: 94.7037% ( 6) 00:07:58.049 20.677 - 20.775: 95.0348% ( 10) 00:07:58.049 20.775 - 20.874: 95.3658% ( 10) 00:07:58.049 20.874 - 20.972: 95.6306% ( 8) 00:07:58.049 20.972 - 21.071: 95.9947% ( 11) 00:07:58.049 21.071 - 21.169: 96.2595% ( 8) 00:07:58.049 21.169 - 21.268: 96.4912% ( 7) 00:07:58.049 21.268 - 21.366: 96.6236% ( 4) 00:07:58.049 21.366 - 21.465: 96.7560% ( 4) 00:07:58.049 21.465 - 21.563: 96.8222% ( 2) 00:07:58.049 21.563 - 21.662: 96.9215% ( 3) 00:07:58.049 21.662 - 21.760: 97.0540% ( 4) 00:07:58.049 21.760 - 21.858: 97.1864% ( 4) 00:07:58.049 21.858 - 21.957: 97.2526% ( 2) 00:07:58.049 21.957 - 22.055: 97.3850% ( 4) 00:07:58.049 22.055 - 22.154: 97.4843% ( 3) 00:07:58.049 22.154 - 22.252: 97.5505% ( 2) 00:07:58.049 22.252 - 22.351: 97.6167% ( 2) 00:07:58.049 22.351 - 22.449: 97.6829% ( 2) 00:07:58.049 22.449 - 22.548: 97.8484% ( 5) 00:07:58.049 22.548 - 22.646: 97.9146% ( 2) 00:07:58.049 22.646 - 22.745: 97.9808% ( 2) 00:07:58.049 22.745 - 22.843: 98.0470% ( 2) 00:07:58.049 22.843 - 22.942: 98.1463% ( 3) 00:07:58.049 22.942 - 23.040: 98.2125% ( 2) 00:07:58.049 23.040 - 23.138: 98.2787% ( 2) 00:07:58.049 23.138 - 23.237: 98.3780% ( 3) 00:07:58.049 23.237 - 23.335: 98.4111% ( 1) 00:07:58.049 23.335 - 23.434: 98.4442% ( 1) 00:07:58.049 23.434 - 23.532: 98.6097% ( 5) 00:07:58.049 23.532 - 23.631: 98.6428% ( 1) 00:07:58.049 23.631 - 23.729: 98.6759% ( 1) 00:07:58.049 23.828 - 23.926: 98.7421% ( 2) 00:07:58.049 24.123 - 24.222: 98.7752% ( 1) 00:07:58.049 24.320 - 24.418: 98.8083% ( 1) 00:07:58.049 24.418 - 24.517: 98.8414% ( 1) 00:07:58.049 24.517 - 24.615: 98.8745% ( 1) 00:07:58.049 24.615 - 24.714: 98.9407% ( 2) 00:07:58.049 24.714 - 24.812: 99.0070% ( 2) 00:07:58.049 25.009 - 25.108: 99.0732% ( 2) 00:07:58.049 25.108 - 25.206: 99.1063% ( 1) 00:07:58.049 25.403 - 25.600: 99.1394% ( 1) 00:07:58.049 25.797 - 25.994: 99.2718% ( 4) 00:07:58.049 25.994 - 26.191: 99.3049% ( 1) 00:07:58.049 26.978 - 27.175: 99.3380% ( 1) 00:07:58.049 31.114 - 31.311: 99.3711% ( 1) 00:07:58.049 31.902 - 32.098: 99.4042% ( 1) 00:07:58.049 32.492 - 32.689: 99.4373% ( 1) 00:07:58.049 32.886 - 33.083: 99.4704% ( 1) 00:07:58.049 33.083 - 33.280: 99.5035% ( 1) 00:07:58.049 33.674 - 33.871: 99.5366% ( 1) 00:07:58.049 35.052 - 35.249: 99.5697% ( 1) 00:07:58.049 35.840 - 36.037: 99.6028% ( 1) 00:07:58.049 37.612 - 37.809: 99.6359% ( 1) 00:07:58.049 38.794 - 38.991: 99.6690% ( 1) 00:07:58.049 40.369 - 40.566: 99.7021% ( 1) 00:07:58.049 42.929 - 43.126: 99.7352% ( 1) 00:07:58.049 44.111 - 44.308: 99.7683% ( 1) 00:07:58.050 49.231 - 49.428: 99.8014% ( 1) 00:07:58.050 61.046 - 61.440: 99.8345% ( 1) 00:07:58.050 66.166 - 66.560: 99.8676% ( 1) 00:07:58.050 113.428 - 114.215: 99.9338% ( 2) 00:07:58.050 121.305 - 122.092: 99.9669% ( 1) 00:07:58.050 141.785 - 142.572: 100.0000% ( 1) 00:07:58.050 00:07:58.050 Complete histogram 00:07:58.050 ================== 00:07:58.050 Range in us Cumulative Count 00:07:58.050 8.222 - 8.271: 0.0331% ( 1) 00:07:58.050 8.271 - 8.320: 0.1986% ( 5) 00:07:58.050 8.320 - 8.369: 1.7213% ( 46) 00:07:58.050 8.369 - 8.418: 6.2562% ( 137) 00:07:58.050 8.418 - 8.468: 13.4393% ( 217) 00:07:58.050 8.468 - 8.517: 23.9325% ( 317) 
00:07:58.050 8.517 - 8.566: 35.1539% ( 339) 00:07:58.050 8.566 - 8.615: 44.1576% ( 272) 00:07:58.050 8.615 - 8.665: 51.1420% ( 211) 00:07:58.050 8.665 - 8.714: 57.3320% ( 187) 00:07:58.050 8.714 - 8.763: 61.9331% ( 139) 00:07:58.050 8.763 - 8.812: 65.6074% ( 111) 00:07:58.050 8.812 - 8.862: 67.8252% ( 67) 00:07:58.050 8.862 - 8.911: 69.6127% ( 54) 00:07:58.050 8.911 - 8.960: 70.5065% ( 27) 00:07:58.050 8.960 - 9.009: 71.4995% ( 30) 00:07:58.050 9.009 - 9.058: 72.1615% ( 20) 00:07:58.050 9.058 - 9.108: 72.6912% ( 16) 00:07:58.050 9.108 - 9.157: 73.1546% ( 14) 00:07:58.050 9.157 - 9.206: 73.3863% ( 7) 00:07:58.050 9.206 - 9.255: 73.5849% ( 6) 00:07:58.050 9.255 - 9.305: 73.6842% ( 3) 00:07:58.050 9.305 - 9.354: 73.7835% ( 3) 00:07:58.050 9.354 - 9.403: 73.8497% ( 2) 00:07:58.050 9.403 - 9.452: 73.9159% ( 2) 00:07:58.050 9.502 - 9.551: 73.9490% ( 1) 00:07:58.050 9.551 - 9.600: 74.0814% ( 4) 00:07:58.050 9.649 - 9.698: 74.1145% ( 1) 00:07:58.050 9.698 - 9.748: 74.1476% ( 1) 00:07:58.050 9.748 - 9.797: 74.1807% ( 1) 00:07:58.050 9.797 - 9.846: 74.2138% ( 1) 00:07:58.050 9.846 - 9.895: 74.3793% ( 5) 00:07:58.050 9.895 - 9.945: 74.4786% ( 3) 00:07:58.050 9.945 - 9.994: 74.5780% ( 3) 00:07:58.050 9.994 - 10.043: 74.6111% ( 1) 00:07:58.050 10.043 - 10.092: 74.8759% ( 8) 00:07:58.050 10.092 - 10.142: 75.1076% ( 7) 00:07:58.050 10.142 - 10.191: 75.5379% ( 13) 00:07:58.050 10.191 - 10.240: 75.7696% ( 7) 00:07:58.050 10.240 - 10.289: 76.1999% ( 13) 00:07:58.050 10.289 - 10.338: 76.6303% ( 13) 00:07:58.050 10.338 - 10.388: 76.9282% ( 9) 00:07:58.050 10.388 - 10.437: 77.1930% ( 8) 00:07:58.050 10.437 - 10.486: 77.3254% ( 4) 00:07:58.050 10.486 - 10.535: 77.4909% ( 5) 00:07:58.050 10.535 - 10.585: 77.6895% ( 6) 00:07:58.050 10.585 - 10.634: 77.7557% ( 2) 00:07:58.050 10.634 - 10.683: 77.8550% ( 3) 00:07:58.050 10.683 - 10.732: 77.9874% ( 4) 00:07:58.050 10.732 - 10.782: 78.0867% ( 3) 00:07:58.050 10.782 - 10.831: 78.1198% ( 1) 00:07:58.050 10.880 - 10.929: 78.2191% ( 3) 00:07:58.050 10.929 - 10.978: 78.2522% ( 1) 00:07:58.050 11.077 - 11.126: 78.2853% ( 1) 00:07:58.050 11.175 - 11.225: 78.3515% ( 2) 00:07:58.050 11.225 - 11.274: 78.3846% ( 1) 00:07:58.050 11.323 - 11.372: 78.4508% ( 2) 00:07:58.050 11.372 - 11.422: 78.5501% ( 3) 00:07:58.050 11.422 - 11.471: 78.6164% ( 2) 00:07:58.050 11.471 - 11.520: 78.6826% ( 2) 00:07:58.050 11.520 - 11.569: 78.7488% ( 2) 00:07:58.050 11.569 - 11.618: 78.8150% ( 2) 00:07:58.050 11.618 - 11.668: 78.9143% ( 3) 00:07:58.050 11.668 - 11.717: 79.0136% ( 3) 00:07:58.050 11.717 - 11.766: 79.0798% ( 2) 00:07:58.050 11.815 - 11.865: 79.1791% ( 3) 00:07:58.050 11.865 - 11.914: 79.4108% ( 7) 00:07:58.050 11.914 - 11.963: 79.5763% ( 5) 00:07:58.050 11.963 - 12.012: 79.6425% ( 2) 00:07:58.050 12.012 - 12.062: 79.9073% ( 8) 00:07:58.050 12.062 - 12.111: 80.1721% ( 8) 00:07:58.050 12.111 - 12.160: 80.4038% ( 7) 00:07:58.050 12.160 - 12.209: 80.5362% ( 4) 00:07:58.050 12.209 - 12.258: 80.5693% ( 1) 00:07:58.050 12.258 - 12.308: 80.7680% ( 6) 00:07:58.050 12.308 - 12.357: 80.8011% ( 1) 00:07:58.050 12.357 - 12.406: 80.9004% ( 3) 00:07:58.050 12.406 - 12.455: 80.9335% ( 1) 00:07:58.050 12.455 - 12.505: 81.0328% ( 3) 00:07:58.050 12.505 - 12.554: 81.1652% ( 4) 00:07:58.050 12.554 - 12.603: 81.3969% ( 7) 00:07:58.050 12.603 - 12.702: 81.7941% ( 12) 00:07:58.050 12.702 - 12.800: 82.4892% ( 21) 00:07:58.050 12.800 - 12.898: 83.6809% ( 36) 00:07:58.050 12.898 - 12.997: 85.6339% ( 59) 00:07:58.050 12.997 - 13.095: 87.4876% ( 56) 00:07:58.050 13.095 - 13.194: 89.6723% ( 66) 00:07:58.050 13.194 
- 13.292: 91.3274% ( 50) 00:07:58.050 13.292 - 13.391: 93.1149% ( 54) 00:07:58.050 13.391 - 13.489: 94.7699% ( 50) 00:07:58.050 13.489 - 13.588: 95.6968% ( 28) 00:07:58.050 13.588 - 13.686: 96.1933% ( 15) 00:07:58.050 13.686 - 13.785: 96.4581% ( 8) 00:07:58.050 13.785 - 13.883: 96.8222% ( 11) 00:07:58.050 13.883 - 13.982: 97.0540% ( 7) 00:07:58.050 13.982 - 14.080: 97.1202% ( 2) 00:07:58.050 14.080 - 14.178: 97.1864% ( 2) 00:07:58.050 14.178 - 14.277: 97.2195% ( 1) 00:07:58.050 14.375 - 14.474: 97.2526% ( 1) 00:07:58.050 14.769 - 14.868: 97.2857% ( 1) 00:07:58.050 14.966 - 15.065: 97.3188% ( 1) 00:07:58.050 15.163 - 15.262: 97.3519% ( 1) 00:07:58.050 15.360 - 15.458: 97.3850% ( 1) 00:07:58.050 15.458 - 15.557: 97.4512% ( 2) 00:07:58.050 15.557 - 15.655: 97.5836% ( 4) 00:07:58.312 15.655 - 15.754: 97.7491% ( 5) 00:07:58.312 15.754 - 15.852: 97.9146% ( 5) 00:07:58.312 15.852 - 15.951: 98.0801% ( 5) 00:07:58.312 15.951 - 16.049: 98.1463% ( 2) 00:07:58.312 16.049 - 16.148: 98.1794% ( 1) 00:07:58.312 16.148 - 16.246: 98.2456% ( 2) 00:07:58.312 16.246 - 16.345: 98.3118% ( 2) 00:07:58.312 16.443 - 16.542: 98.3780% ( 2) 00:07:58.312 16.542 - 16.640: 98.4442% ( 2) 00:07:58.312 16.640 - 16.738: 98.4773% ( 1) 00:07:58.312 16.738 - 16.837: 98.5104% ( 1) 00:07:58.312 16.837 - 16.935: 98.5435% ( 1) 00:07:58.312 17.132 - 17.231: 98.5766% ( 1) 00:07:58.312 17.231 - 17.329: 98.6097% ( 1) 00:07:58.312 17.329 - 17.428: 98.6428% ( 1) 00:07:58.312 17.428 - 17.526: 98.6759% ( 1) 00:07:58.312 17.526 - 17.625: 98.7090% ( 1) 00:07:58.312 17.920 - 18.018: 98.7421% ( 1) 00:07:58.312 18.314 - 18.412: 98.7752% ( 1) 00:07:58.312 19.102 - 19.200: 98.8083% ( 1) 00:07:58.312 19.397 - 19.495: 98.8414% ( 1) 00:07:58.312 20.382 - 20.480: 98.8745% ( 1) 00:07:58.312 20.677 - 20.775: 98.9738% ( 3) 00:07:58.312 20.775 - 20.874: 99.0401% ( 2) 00:07:58.312 20.972 - 21.071: 99.1063% ( 2) 00:07:58.312 21.071 - 21.169: 99.1394% ( 1) 00:07:58.312 21.169 - 21.268: 99.1725% ( 1) 00:07:58.312 21.268 - 21.366: 99.2056% ( 1) 00:07:58.312 21.366 - 21.465: 99.2387% ( 1) 00:07:58.312 21.465 - 21.563: 99.2718% ( 1) 00:07:58.312 21.563 - 21.662: 99.3049% ( 1) 00:07:58.312 21.760 - 21.858: 99.3380% ( 1) 00:07:58.312 22.154 - 22.252: 99.3711% ( 1) 00:07:58.312 22.351 - 22.449: 99.4042% ( 1) 00:07:58.312 22.843 - 22.942: 99.4373% ( 1) 00:07:58.312 22.942 - 23.040: 99.4704% ( 1) 00:07:58.312 23.237 - 23.335: 99.5035% ( 1) 00:07:58.312 23.434 - 23.532: 99.5366% ( 1) 00:07:58.312 25.797 - 25.994: 99.5697% ( 1) 00:07:58.312 25.994 - 26.191: 99.6028% ( 1) 00:07:58.312 27.766 - 27.963: 99.6690% ( 2) 00:07:58.312 27.963 - 28.160: 99.7021% ( 1) 00:07:58.312 28.751 - 28.948: 99.7352% ( 1) 00:07:58.312 29.538 - 29.735: 99.7683% ( 1) 00:07:58.312 32.492 - 32.689: 99.8014% ( 1) 00:07:58.312 37.218 - 37.415: 99.8345% ( 1) 00:07:58.312 46.671 - 46.868: 99.8676% ( 1) 00:07:58.312 63.803 - 64.197: 99.9007% ( 1) 00:07:58.312 277.268 - 278.843: 99.9338% ( 1) 00:07:58.312 348.160 - 349.735: 99.9669% ( 1) 00:07:58.312 1039.754 - 1046.055: 100.0000% ( 1) 00:07:58.312 00:07:58.312 00:07:58.312 real 0m1.192s 00:07:58.312 user 0m1.057s 00:07:58.312 sys 0m0.083s 00:07:58.312 10:39:58 nvme.nvme_overhead -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:58.312 10:39:58 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x 00:07:58.312 ************************************ 00:07:58.312 END TEST nvme_overhead 00:07:58.312 ************************************ 00:07:58.312 10:39:58 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration 
/home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:07:58.312 10:39:58 nvme -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:07:58.312 10:39:58 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:58.312 10:39:58 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:58.312 ************************************ 00:07:58.312 START TEST nvme_arbitration 00:07:58.312 ************************************ 00:07:58.312 10:39:58 nvme.nvme_arbitration -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:08:01.616 Initializing NVMe Controllers 00:08:01.616 Attached to 0000:00:13.0 00:08:01.616 Attached to 0000:00:10.0 00:08:01.616 Attached to 0000:00:11.0 00:08:01.616 Attached to 0000:00:12.0 00:08:01.616 Associating QEMU NVMe Ctrl (12343 ) with lcore 0 00:08:01.616 Associating QEMU NVMe Ctrl (12340 ) with lcore 1 00:08:01.616 Associating QEMU NVMe Ctrl (12341 ) with lcore 2 00:08:01.616 Associating QEMU NVMe Ctrl (12342 ) with lcore 3 00:08:01.616 Associating QEMU NVMe Ctrl (12342 ) with lcore 0 00:08:01.616 Associating QEMU NVMe Ctrl (12342 ) with lcore 1 00:08:01.616 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:08:01.616 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:08:01.616 Initialization complete. Launching workers. 00:08:01.616 Starting thread on core 1 with urgent priority queue 00:08:01.616 Starting thread on core 2 with urgent priority queue 00:08:01.616 Starting thread on core 3 with urgent priority queue 00:08:01.616 Starting thread on core 0 with urgent priority queue 00:08:01.616 QEMU NVMe Ctrl (12343 ) core 0: 3776.00 IO/s 26.48 secs/100000 ios 00:08:01.616 QEMU NVMe Ctrl (12342 ) core 0: 3776.00 IO/s 26.48 secs/100000 ios 00:08:01.616 QEMU NVMe Ctrl (12340 ) core 1: 3797.33 IO/s 26.33 secs/100000 ios 00:08:01.616 QEMU NVMe Ctrl (12342 ) core 1: 3797.33 IO/s 26.33 secs/100000 ios 00:08:01.616 QEMU NVMe Ctrl (12341 ) core 2: 3456.00 IO/s 28.94 secs/100000 ios 00:08:01.617 QEMU NVMe Ctrl (12342 ) core 3: 3456.00 IO/s 28.94 secs/100000 ios 00:08:01.617 ======================================================== 00:08:01.617 00:08:01.617 00:08:01.617 real 0m3.203s 00:08:01.617 user 0m8.994s 00:08:01.617 sys 0m0.109s 00:08:01.617 10:40:01 nvme.nvme_arbitration -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:01.617 ************************************ 00:08:01.617 END TEST nvme_arbitration 00:08:01.617 ************************************ 00:08:01.617 10:40:01 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:08:01.617 10:40:01 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:08:01.617 10:40:01 nvme -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:01.617 10:40:01 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:01.617 10:40:01 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:01.617 ************************************ 00:08:01.617 START TEST nvme_single_aen 00:08:01.617 ************************************ 00:08:01.617 10:40:01 nvme.nvme_single_aen -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:08:01.617 Asynchronous Event Request test 00:08:01.617 Attached to 0000:00:13.0 00:08:01.617 Attached to 0000:00:10.0 00:08:01.617 Attached to 0000:00:11.0 00:08:01.617 Attached to 0000:00:12.0 00:08:01.617 
Reset controller to setup AER completions for this process 00:08:01.617 Registering asynchronous event callbacks... 00:08:01.617 Getting orig temperature thresholds of all controllers 00:08:01.617 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:01.617 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:01.617 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:01.617 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:01.617 Setting all controllers temperature threshold low to trigger AER 00:08:01.617 Waiting for all controllers temperature threshold to be set lower 00:08:01.617 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:01.617 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:08:01.617 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:01.617 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:08:01.617 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:01.617 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:08:01.617 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:01.617 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:08:01.617 Waiting for all controllers to trigger AER and reset threshold 00:08:01.617 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:01.617 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:01.617 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:01.617 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:01.617 Cleaning up... 00:08:01.617 00:08:01.617 real 0m0.182s 00:08:01.617 user 0m0.058s 00:08:01.617 sys 0m0.081s 00:08:01.617 ************************************ 00:08:01.617 END TEST nvme_single_aen 00:08:01.617 ************************************ 00:08:01.617 10:40:01 nvme.nvme_single_aen -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:01.617 10:40:01 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:08:01.878 10:40:01 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:08:01.878 10:40:01 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:01.878 10:40:01 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:01.878 10:40:01 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:01.878 ************************************ 00:08:01.878 START TEST nvme_doorbell_aers 00:08:01.878 ************************************ 00:08:01.878 10:40:01 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1125 -- # nvme_doorbell_aers 00:08:01.878 10:40:01 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:08:01.878 10:40:01 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:08:01.878 10:40:01 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:08:01.878 10:40:01 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:08:01.878 10:40:01 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1496 -- # bdfs=() 00:08:01.878 10:40:01 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1496 -- # local bdfs 00:08:01.878 10:40:01 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:01.878 10:40:01 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 
00:08:01.878 10:40:01 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:08:01.878 10:40:01 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:08:01.878 10:40:01 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:01.878 10:40:01 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:01.878 10:40:01 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:08:02.140 [2024-12-16 10:40:01.895163] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75096) is not found. Dropping the request. 00:08:12.153 Executing: test_write_invalid_db 00:08:12.153 Waiting for AER completion... 00:08:12.153 Failure: test_write_invalid_db 00:08:12.153 00:08:12.153 Executing: test_invalid_db_write_overflow_sq 00:08:12.153 Waiting for AER completion... 00:08:12.153 Failure: test_invalid_db_write_overflow_sq 00:08:12.153 00:08:12.153 Executing: test_invalid_db_write_overflow_cq 00:08:12.153 Waiting for AER completion... 00:08:12.153 Failure: test_invalid_db_write_overflow_cq 00:08:12.153 00:08:12.153 10:40:11 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:12.153 10:40:11 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:08:12.153 [2024-12-16 10:40:11.910323] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75096) is not found. Dropping the request. 00:08:22.146 Executing: test_write_invalid_db 00:08:22.146 Waiting for AER completion... 00:08:22.146 Failure: test_write_invalid_db 00:08:22.146 00:08:22.146 Executing: test_invalid_db_write_overflow_sq 00:08:22.146 Waiting for AER completion... 00:08:22.146 Failure: test_invalid_db_write_overflow_sq 00:08:22.146 00:08:22.146 Executing: test_invalid_db_write_overflow_cq 00:08:22.146 Waiting for AER completion... 00:08:22.146 Failure: test_invalid_db_write_overflow_cq 00:08:22.146 00:08:22.146 10:40:21 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:22.146 10:40:21 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:08:22.146 [2024-12-16 10:40:21.933408] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75096) is not found. Dropping the request. 00:08:32.112 Executing: test_write_invalid_db 00:08:32.112 Waiting for AER completion... 00:08:32.112 Failure: test_write_invalid_db 00:08:32.112 00:08:32.112 Executing: test_invalid_db_write_overflow_sq 00:08:32.112 Waiting for AER completion... 00:08:32.112 Failure: test_invalid_db_write_overflow_sq 00:08:32.112 00:08:32.112 Executing: test_invalid_db_write_overflow_cq 00:08:32.112 Waiting for AER completion... 
00:08:32.112 Failure: test_invalid_db_write_overflow_cq 00:08:32.112 00:08:32.112 10:40:31 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:32.112 10:40:31 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:08:32.112 [2024-12-16 10:40:31.947201] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75096) is not found. Dropping the request. 00:08:42.087 Executing: test_write_invalid_db 00:08:42.087 Waiting for AER completion... 00:08:42.087 Failure: test_write_invalid_db 00:08:42.087 00:08:42.087 Executing: test_invalid_db_write_overflow_sq 00:08:42.087 Waiting for AER completion... 00:08:42.087 Failure: test_invalid_db_write_overflow_sq 00:08:42.087 00:08:42.087 Executing: test_invalid_db_write_overflow_cq 00:08:42.087 Waiting for AER completion... 00:08:42.087 Failure: test_invalid_db_write_overflow_cq 00:08:42.087 00:08:42.087 00:08:42.087 real 0m40.182s 00:08:42.087 user 0m34.237s 00:08:42.087 sys 0m5.582s 00:08:42.087 10:40:41 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:42.087 10:40:41 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:08:42.087 ************************************ 00:08:42.087 END TEST nvme_doorbell_aers 00:08:42.087 ************************************ 00:08:42.087 10:40:41 nvme -- nvme/nvme.sh@97 -- # uname 00:08:42.087 10:40:41 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:08:42.087 10:40:41 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:08:42.087 10:40:41 nvme -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:08:42.087 10:40:41 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:42.087 10:40:41 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:42.087 ************************************ 00:08:42.087 START TEST nvme_multi_aen 00:08:42.087 ************************************ 00:08:42.087 10:40:41 nvme.nvme_multi_aen -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:08:42.087 [2024-12-16 10:40:41.998454] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75096) is not found. Dropping the request. 00:08:42.087 [2024-12-16 10:40:41.998526] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75096) is not found. Dropping the request. 00:08:42.087 [2024-12-16 10:40:41.998538] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75096) is not found. Dropping the request. 00:08:42.087 [2024-12-16 10:40:41.999625] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75096) is not found. Dropping the request. 00:08:42.087 [2024-12-16 10:40:41.999645] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75096) is not found. Dropping the request. 00:08:42.087 [2024-12-16 10:40:41.999653] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75096) is not found. Dropping the request. 00:08:42.087 [2024-12-16 10:40:42.000939] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75096) is not found. 
Dropping the request. 00:08:42.087 [2024-12-16 10:40:42.000963] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75096) is not found. Dropping the request. 00:08:42.087 [2024-12-16 10:40:42.000971] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75096) is not found. Dropping the request. 00:08:42.087 [2024-12-16 10:40:42.001987] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75096) is not found. Dropping the request. 00:08:42.087 [2024-12-16 10:40:42.002009] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75096) is not found. Dropping the request. 00:08:42.087 [2024-12-16 10:40:42.002016] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75096) is not found. Dropping the request. 00:08:42.087 Child process pid: 75618 00:08:42.346 [Child] Asynchronous Event Request test 00:08:42.346 [Child] Attached to 0000:00:13.0 00:08:42.346 [Child] Attached to 0000:00:10.0 00:08:42.346 [Child] Attached to 0000:00:11.0 00:08:42.346 [Child] Attached to 0000:00:12.0 00:08:42.346 [Child] Registering asynchronous event callbacks... 00:08:42.346 [Child] Getting orig temperature thresholds of all controllers 00:08:42.346 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:42.346 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:42.346 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:42.346 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:42.346 [Child] Waiting for all controllers to trigger AER and reset threshold 00:08:42.346 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:42.346 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:42.346 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:42.346 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:42.346 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:42.346 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:42.346 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:42.346 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:42.346 [Child] Cleaning up... 00:08:42.346 Asynchronous Event Request test 00:08:42.346 Attached to 0000:00:13.0 00:08:42.346 Attached to 0000:00:10.0 00:08:42.346 Attached to 0000:00:11.0 00:08:42.346 Attached to 0000:00:12.0 00:08:42.346 Reset controller to setup AER completions for this process 00:08:42.346 Registering asynchronous event callbacks... 
00:08:42.346 Getting orig temperature thresholds of all controllers 00:08:42.346 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:42.346 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:42.346 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:42.346 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:42.346 Setting all controllers temperature threshold low to trigger AER 00:08:42.346 Waiting for all controllers temperature threshold to be set lower 00:08:42.346 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:42.346 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:08:42.346 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:42.346 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:08:42.346 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:42.346 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:08:42.346 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:42.346 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:08:42.346 Waiting for all controllers to trigger AER and reset threshold 00:08:42.346 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:42.346 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:42.346 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:42.346 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:42.346 Cleaning up... 00:08:42.346 00:08:42.346 real 0m0.327s 00:08:42.346 user 0m0.105s 00:08:42.346 sys 0m0.130s 00:08:42.346 10:40:42 nvme.nvme_multi_aen -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:42.346 10:40:42 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:08:42.346 ************************************ 00:08:42.346 END TEST nvme_multi_aen 00:08:42.346 ************************************ 00:08:42.346 10:40:42 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:08:42.346 10:40:42 nvme -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:08:42.346 10:40:42 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:42.346 10:40:42 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:42.346 ************************************ 00:08:42.346 START TEST nvme_startup 00:08:42.346 ************************************ 00:08:42.346 10:40:42 nvme.nvme_startup -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:08:42.605 Initializing NVMe Controllers 00:08:42.605 Attached to 0000:00:13.0 00:08:42.605 Attached to 0000:00:10.0 00:08:42.605 Attached to 0000:00:11.0 00:08:42.605 Attached to 0000:00:12.0 00:08:42.605 Initialization complete. 00:08:42.605 Time used:132404.844 (us). 
00:08:42.605 00:08:42.605 real 0m0.178s 00:08:42.605 user 0m0.050s 00:08:42.605 sys 0m0.074s 00:08:42.605 10:40:42 nvme.nvme_startup -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:42.605 10:40:42 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:08:42.605 ************************************ 00:08:42.605 END TEST nvme_startup 00:08:42.605 ************************************ 00:08:42.605 10:40:42 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:08:42.605 10:40:42 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:42.605 10:40:42 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:42.605 10:40:42 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:42.605 ************************************ 00:08:42.605 START TEST nvme_multi_secondary 00:08:42.605 ************************************ 00:08:42.605 10:40:42 nvme.nvme_multi_secondary -- common/autotest_common.sh@1125 -- # nvme_multi_secondary 00:08:42.605 10:40:42 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=75668 00:08:42.605 10:40:42 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=75669 00:08:42.605 10:40:42 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:08:42.605 10:40:42 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:08:42.605 10:40:42 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:08:45.898 Initializing NVMe Controllers 00:08:45.898 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:45.898 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:45.898 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:45.898 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:45.898 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:08:45.898 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:08:45.898 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:08:45.898 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:08:45.898 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:08:45.898 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:08:45.898 Initialization complete. Launching workers. 
00:08:45.898 ======================================================== 00:08:45.898 Latency(us) 00:08:45.898 Device Information : IOPS MiB/s Average min max 00:08:45.898 PCIE (0000:00:13.0) NSID 1 from core 2: 2324.79 9.08 6881.90 788.47 27940.87 00:08:45.898 PCIE (0000:00:10.0) NSID 1 from core 2: 2324.79 9.08 6883.46 774.65 28673.50 00:08:45.899 PCIE (0000:00:11.0) NSID 1 from core 2: 2324.79 9.08 6891.39 725.33 25885.14 00:08:45.899 PCIE (0000:00:12.0) NSID 1 from core 2: 2324.79 9.08 6891.30 776.30 27907.19 00:08:45.899 PCIE (0000:00:12.0) NSID 2 from core 2: 2324.79 9.08 6902.36 776.39 29178.18 00:08:45.899 PCIE (0000:00:12.0) NSID 3 from core 2: 2324.79 9.08 6902.51 790.66 27160.08 00:08:45.899 ======================================================== 00:08:45.899 Total : 13948.76 54.49 6892.15 725.33 29178.18 00:08:45.899 00:08:45.899 10:40:45 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 75668 00:08:45.899 Initializing NVMe Controllers 00:08:45.899 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:45.899 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:45.899 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:45.899 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:45.899 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:08:45.899 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:08:45.899 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:08:45.899 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:08:45.899 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:08:45.899 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:08:45.899 Initialization complete. Launching workers. 00:08:45.899 ======================================================== 00:08:45.899 Latency(us) 00:08:45.899 Device Information : IOPS MiB/s Average min max 00:08:45.899 PCIE (0000:00:13.0) NSID 1 from core 1: 5394.91 21.07 2965.19 1000.42 12835.37 00:08:45.899 PCIE (0000:00:10.0) NSID 1 from core 1: 5394.91 21.07 2963.99 911.36 12872.19 00:08:45.899 PCIE (0000:00:11.0) NSID 1 from core 1: 5394.91 21.07 2966.26 971.55 13322.06 00:08:45.899 PCIE (0000:00:12.0) NSID 1 from core 1: 5394.91 21.07 2966.50 1032.93 13809.66 00:08:45.899 PCIE (0000:00:12.0) NSID 2 from core 1: 5394.91 21.07 2966.38 918.14 11543.65 00:08:45.899 PCIE (0000:00:12.0) NSID 3 from core 1: 5394.91 21.07 2966.28 874.55 14142.52 00:08:45.899 ======================================================== 00:08:45.899 Total : 32369.47 126.44 2965.77 874.55 14142.52 00:08:45.899 00:08:47.803 Initializing NVMe Controllers 00:08:47.803 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:47.803 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:47.803 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:47.803 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:47.803 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:08:47.803 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:08:47.803 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:08:47.803 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:08:47.803 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:08:47.803 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:08:47.803 Initialization complete. Launching workers. 
00:08:47.803 ======================================================== 00:08:47.803 Latency(us) 00:08:47.803 Device Information : IOPS MiB/s Average min max 00:08:47.803 PCIE (0000:00:13.0) NSID 1 from core 0: 8254.48 32.24 1937.89 696.88 14714.63 00:08:47.803 PCIE (0000:00:10.0) NSID 1 from core 0: 8254.48 32.24 1936.90 685.87 14621.55 00:08:47.803 PCIE (0000:00:11.0) NSID 1 from core 0: 8254.48 32.24 1937.84 692.57 12435.35 00:08:47.803 PCIE (0000:00:12.0) NSID 1 from core 0: 8254.48 32.24 1937.81 700.94 13318.00 00:08:47.803 PCIE (0000:00:12.0) NSID 2 from core 0: 8254.48 32.24 1937.78 630.10 11885.12 00:08:47.803 PCIE (0000:00:12.0) NSID 3 from core 0: 8254.48 32.24 1937.76 519.26 13987.75 00:08:47.803 ======================================================== 00:08:47.803 Total : 49526.88 193.46 1937.66 519.26 14714.63 00:08:47.803 00:08:47.803 10:40:47 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 75669 00:08:47.803 10:40:47 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=75738 00:08:47.803 10:40:47 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:08:47.803 10:40:47 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=75739 00:08:47.803 10:40:47 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:08:47.803 10:40:47 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:08:51.087 Initializing NVMe Controllers 00:08:51.087 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:51.087 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:51.087 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:51.087 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:51.087 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:08:51.087 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:08:51.087 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:08:51.087 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:08:51.087 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:08:51.087 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:08:51.088 Initialization complete. Launching workers. 
00:08:51.088 ======================================================== 00:08:51.088 Latency(us) 00:08:51.088 Device Information : IOPS MiB/s Average min max 00:08:51.088 PCIE (0000:00:13.0) NSID 1 from core 0: 8263.82 32.28 1935.72 753.12 5650.97 00:08:51.088 PCIE (0000:00:10.0) NSID 1 from core 0: 8263.82 32.28 1934.89 731.70 5771.09 00:08:51.088 PCIE (0000:00:11.0) NSID 1 from core 0: 8263.82 32.28 1935.79 750.89 5681.03 00:08:51.088 PCIE (0000:00:12.0) NSID 1 from core 0: 8263.82 32.28 1935.86 751.28 5630.15 00:08:51.088 PCIE (0000:00:12.0) NSID 2 from core 0: 8263.82 32.28 1936.01 761.26 5200.21 00:08:51.088 PCIE (0000:00:12.0) NSID 3 from core 0: 8263.82 32.28 1936.08 756.33 5320.07 00:08:51.088 ======================================================== 00:08:51.088 Total : 49582.89 193.68 1935.73 731.70 5771.09 00:08:51.088 00:08:51.088 Initializing NVMe Controllers 00:08:51.088 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:51.088 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:51.088 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:51.088 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:51.088 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:08:51.088 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:08:51.088 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:08:51.088 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:08:51.088 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:08:51.088 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:08:51.088 Initialization complete. Launching workers. 00:08:51.088 ======================================================== 00:08:51.088 Latency(us) 00:08:51.088 Device Information : IOPS MiB/s Average min max 00:08:51.088 PCIE (0000:00:13.0) NSID 1 from core 1: 8058.46 31.48 1985.04 728.87 5495.81 00:08:51.088 PCIE (0000:00:10.0) NSID 1 from core 1: 8058.46 31.48 1984.28 714.01 6299.27 00:08:51.088 PCIE (0000:00:11.0) NSID 1 from core 1: 8058.46 31.48 1985.38 732.78 5939.56 00:08:51.088 PCIE (0000:00:12.0) NSID 1 from core 1: 8058.46 31.48 1985.46 733.43 5966.15 00:08:51.088 PCIE (0000:00:12.0) NSID 2 from core 1: 8058.46 31.48 1985.43 724.88 6129.71 00:08:51.088 PCIE (0000:00:12.0) NSID 3 from core 1: 8058.46 31.48 1985.41 726.03 5385.07 00:08:51.088 ======================================================== 00:08:51.088 Total : 48350.76 188.87 1985.17 714.01 6299.27 00:08:51.088 00:08:53.626 Initializing NVMe Controllers 00:08:53.626 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:53.626 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:53.626 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:53.626 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:53.626 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:08:53.626 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:08:53.626 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:08:53.626 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:08:53.626 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:08:53.626 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:08:53.626 Initialization complete. Launching workers. 
00:08:53.626 ======================================================== 00:08:53.626 Latency(us) 00:08:53.626 Device Information : IOPS MiB/s Average min max 00:08:53.626 PCIE (0000:00:13.0) NSID 1 from core 2: 3713.28 14.50 4308.23 729.65 13256.55 00:08:53.626 PCIE (0000:00:10.0) NSID 1 from core 2: 3713.28 14.50 4307.32 732.92 13084.08 00:08:53.626 PCIE (0000:00:11.0) NSID 1 from core 2: 3713.28 14.50 4308.46 727.70 12324.30 00:08:53.626 PCIE (0000:00:12.0) NSID 1 from core 2: 3713.28 14.50 4308.35 743.31 13163.15 00:08:53.626 PCIE (0000:00:12.0) NSID 2 from core 2: 3713.28 14.50 4308.03 745.01 13686.70 00:08:53.626 PCIE (0000:00:12.0) NSID 3 from core 2: 3713.28 14.50 4307.94 733.57 16709.08 00:08:53.626 ======================================================== 00:08:53.626 Total : 22279.67 87.03 4308.05 727.70 16709.08 00:08:53.626 00:08:53.626 10:40:53 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 75738 00:08:53.626 10:40:53 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 75739 00:08:53.626 00:08:53.626 real 0m10.601s 00:08:53.626 user 0m18.223s 00:08:53.626 sys 0m0.526s 00:08:53.626 10:40:53 nvme.nvme_multi_secondary -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:53.626 10:40:53 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:08:53.626 ************************************ 00:08:53.626 END TEST nvme_multi_secondary 00:08:53.626 ************************************ 00:08:53.626 10:40:53 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:08:53.626 10:40:53 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:08:53.626 10:40:53 nvme -- common/autotest_common.sh@1089 -- # [[ -e /proc/74707 ]] 00:08:53.626 10:40:53 nvme -- common/autotest_common.sh@1090 -- # kill 74707 00:08:53.626 10:40:53 nvme -- common/autotest_common.sh@1091 -- # wait 74707 00:08:53.626 [2024-12-16 10:40:53.094674] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75617) is not found. Dropping the request. 00:08:53.626 [2024-12-16 10:40:53.095408] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75617) is not found. Dropping the request. 00:08:53.626 [2024-12-16 10:40:53.095451] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75617) is not found. Dropping the request. 00:08:53.626 [2024-12-16 10:40:53.095474] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75617) is not found. Dropping the request. 00:08:53.626 [2024-12-16 10:40:53.096702] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75617) is not found. Dropping the request. 00:08:53.626 [2024-12-16 10:40:53.096784] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75617) is not found. Dropping the request. 00:08:53.626 [2024-12-16 10:40:53.096806] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75617) is not found. Dropping the request. 00:08:53.627 [2024-12-16 10:40:53.096827] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75617) is not found. Dropping the request. 00:08:53.627 [2024-12-16 10:40:53.098067] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75617) is not found. Dropping the request. 
00:08:53.627 [2024-12-16 10:40:53.098226] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75617) is not found. Dropping the request. 00:08:53.627 [2024-12-16 10:40:53.098276] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75617) is not found. Dropping the request. 00:08:53.627 [2024-12-16 10:40:53.098325] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75617) is not found. Dropping the request. 00:08:53.627 [2024-12-16 10:40:53.099424] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75617) is not found. Dropping the request. 00:08:53.627 [2024-12-16 10:40:53.099562] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75617) is not found. Dropping the request. 00:08:53.627 [2024-12-16 10:40:53.099616] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75617) is not found. Dropping the request. 00:08:53.627 [2024-12-16 10:40:53.099660] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75617) is not found. Dropping the request. 00:08:53.627 [2024-12-16 10:40:53.177152] nvme_cuse.c:1023:cuse_thread: *NOTICE*: Cuse thread exited. 00:08:53.627 10:40:53 nvme -- common/autotest_common.sh@1093 -- # rm -f /var/run/spdk_stub0 00:08:53.627 10:40:53 nvme -- common/autotest_common.sh@1097 -- # echo 2 00:08:53.627 10:40:53 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:08:53.627 10:40:53 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:53.627 10:40:53 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:53.627 10:40:53 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:53.627 ************************************ 00:08:53.627 START TEST bdev_nvme_reset_stuck_adm_cmd 00:08:53.627 ************************************ 00:08:53.627 10:40:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:08:53.627 * Looking for test storage... 
00:08:53.627 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:08:53.627 10:40:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:53.627 10:40:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1681 -- # lcov --version 00:08:53.627 10:40:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:53.627 10:40:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:53.627 10:40:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:53.627 10:40:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:53.627 10:40:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:53.627 10:40:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # IFS=.-: 00:08:53.627 10:40:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # read -ra ver1 00:08:53.627 10:40:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # IFS=.-: 00:08:53.627 10:40:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # read -ra ver2 00:08:53.627 10:40:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@338 -- # local 'op=<' 00:08:53.627 10:40:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@340 -- # ver1_l=2 00:08:53.627 10:40:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@341 -- # ver2_l=1 00:08:53.627 10:40:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:53.627 10:40:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@344 -- # case "$op" in 00:08:53.627 10:40:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@345 -- # : 1 00:08:53.627 10:40:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:53.627 10:40:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:53.627 10:40:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # decimal 1 00:08:53.627 10:40:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=1 00:08:53.627 10:40:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:53.627 10:40:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 1 00:08:53.627 10:40:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # ver1[v]=1 00:08:53.627 10:40:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # decimal 2 00:08:53.627 10:40:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=2 00:08:53.627 10:40:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:53.627 10:40:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 2 00:08:53.627 10:40:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # ver2[v]=2 00:08:53.627 10:40:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:53.627 10:40:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:53.627 10:40:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # return 0 00:08:53.627 10:40:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:53.627 10:40:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:53.627 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:53.627 --rc genhtml_branch_coverage=1 00:08:53.627 --rc genhtml_function_coverage=1 00:08:53.627 --rc genhtml_legend=1 00:08:53.627 --rc geninfo_all_blocks=1 00:08:53.627 --rc geninfo_unexecuted_blocks=1 00:08:53.627 00:08:53.627 ' 00:08:53.627 10:40:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:53.627 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:53.627 --rc genhtml_branch_coverage=1 00:08:53.627 --rc genhtml_function_coverage=1 00:08:53.627 --rc genhtml_legend=1 00:08:53.627 --rc geninfo_all_blocks=1 00:08:53.627 --rc geninfo_unexecuted_blocks=1 00:08:53.627 00:08:53.627 ' 00:08:53.627 10:40:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:53.627 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:53.627 --rc genhtml_branch_coverage=1 00:08:53.627 --rc genhtml_function_coverage=1 00:08:53.627 --rc genhtml_legend=1 00:08:53.627 --rc geninfo_all_blocks=1 00:08:53.627 --rc geninfo_unexecuted_blocks=1 00:08:53.627 00:08:53.627 ' 00:08:53.627 10:40:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:53.627 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:53.627 --rc genhtml_branch_coverage=1 00:08:53.627 --rc genhtml_function_coverage=1 00:08:53.627 --rc genhtml_legend=1 00:08:53.627 --rc geninfo_all_blocks=1 00:08:53.627 --rc geninfo_unexecuted_blocks=1 00:08:53.627 00:08:53.627 ' 00:08:53.627 10:40:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:08:53.627 10:40:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:08:53.627 10:40:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:08:53.627 
10:40:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:08:53.627 10:40:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:08:53.627 10:40:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:08:53.627 10:40:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1507 -- # bdfs=() 00:08:53.627 10:40:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1507 -- # local bdfs 00:08:53.627 10:40:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs)) 00:08:53.627 10:40:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1508 -- # get_nvme_bdfs 00:08:53.627 10:40:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1496 -- # bdfs=() 00:08:53.627 10:40:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1496 -- # local bdfs 00:08:53.627 10:40:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:53.627 10:40:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:53.627 10:40:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:08:53.627 10:40:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:08:53.627 10:40:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:53.627 10:40:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # echo 0000:00:10.0 00:08:53.627 10:40:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:08:53.627 10:40:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:08:53.627 10:40:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=75902 00:08:53.627 10:40:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:53.627 10:40:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 75902 00:08:53.627 10:40:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:08:53.627 10:40:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@831 -- # '[' -z 75902 ']' 00:08:53.627 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:53.627 10:40:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:53.627 10:40:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:53.627 10:40:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
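While spdk_tgt comes up on 0000:00:10.0, it is worth condensing what the xtrace below walks through, since the whole test is driven over JSON-RPC. Every rpc.py call and flag in this sketch appears verbatim later in this log; only the backgrounding/wait plumbing is simplified:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # 1. Attach the controller as bdev "nvme0".
    $rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0

    # 2. Arm a one-shot injection: hold admin opcode 0x0a (Get Features) for
    #    up to 15 s, then complete it with sct=0 / sc=1 (Invalid Opcode).
    $rpc bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 \
        --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit

    # 3. Send a Get Features command in the background; it gets stuck on
    #    the injection.
    $rpc bdev_nvme_send_cmd -n nvme0 -t admin -r c2h \
        -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== &

    # 4. Reset the controller while that command is pending; the reset path
    #    must complete the stuck request manually (the sqhd:0000 line below).
    $rpc bdev_nvme_reset_controller nvme0

    # 5. Collect the completion and tear down.
    wait
    $rpc bdev_nvme_detach_controller nvme0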
00:08:53.627 10:40:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:53.627 10:40:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:53.627 [2024-12-16 10:40:53.474317] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:08:53.627 [2024-12-16 10:40:53.474432] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75902 ] 00:08:53.888 [2024-12-16 10:40:53.613191] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:53.888 [2024-12-16 10:40:53.648905] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:08:53.888 [2024-12-16 10:40:53.649144] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:08:53.888 [2024-12-16 10:40:53.649362] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:08:53.888 [2024-12-16 10:40:53.649384] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:54.457 10:40:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:54.457 10:40:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@864 -- # return 0 00:08:54.457 10:40:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:08:54.457 10:40:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.457 10:40:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:54.457 nvme0n1 00:08:54.457 10:40:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.457 10:40:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:08:54.457 10:40:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_ivYtB.txt 00:08:54.457 10:40:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:08:54.457 10:40:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.457 10:40:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:54.457 true 00:08:54.457 10:40:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.457 10:40:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:08:54.457 10:40:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1734345654 00:08:54.457 10:40:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=75925 00:08:54.457 10:40:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:54.457 10:40:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:08:54.457 10:40:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h 
-c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:08:56.364 10:40:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:08:56.364 10:40:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.364 10:40:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:56.622 [2024-12-16 10:40:56.354650] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:08:56.622 [2024-12-16 10:40:56.355169] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:08:56.622 [2024-12-16 10:40:56.355202] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:08:56.622 [2024-12-16 10:40:56.355218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:56.622 [2024-12-16 10:40:56.357051] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:08:56.622 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 75925 00:08:56.622 10:40:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.622 10:40:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 75925 00:08:56.622 10:40:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 75925 00:08:56.622 10:40:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:08:56.622 10:40:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:08:56.622 10:40:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:08:56.622 10:40:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.622 10:40:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:56.622 10:40:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.622 10:40:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:08:56.622 10:40:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_ivYtB.txt 00:08:56.622 10:40:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:08:56.622 10:40:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:08:56.623 10:40:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:08:56.623 10:40:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:08:56.623 10:40:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:08:56.623 10:40:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:08:56.623 10:40:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:08:56.623 10:40:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:08:56.623 10:40:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:08:56.623 10:40:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:08:56.623 10:40:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:08:56.623 10:40:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:08:56.623 10:40:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:08:56.623 10:40:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:08:56.623 10:40:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:08:56.623 10:40:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:08:56.623 10:40:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:08:56.623 10:40:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:08:56.623 10:40:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:08:56.623 10:40:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_ivYtB.txt 00:08:56.623 10:40:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 75902 00:08:56.623 10:40:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@950 -- # '[' -z 75902 ']' 00:08:56.623 10:40:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # kill -0 75902 00:08:56.623 10:40:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@955 -- # uname 00:08:56.623 10:40:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:56.623 10:40:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75902 00:08:56.623 killing process with pid 75902 00:08:56.623 10:40:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:56.623 10:40:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:56.623 10:40:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75902' 00:08:56.623 10:40:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@969 -- # kill 75902 00:08:56.623 10:40:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@974 -- # wait 75902 00:08:56.881 10:40:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:08:56.881 10:40:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:08:56.881 ************************************ 00:08:56.881 END TEST bdev_nvme_reset_stuck_adm_cmd 00:08:56.881 ************************************ 00:08:56.881 00:08:56.881 real 0m3.542s 
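The base64_decode_bits xtrace above is just pulling the status word out of the saved 16-byte completion. Decoding the captured .cpl blob by hand shows where the 0x1/0x0 pair comes from; this is a sketch assuming the standard NVMe CQE layout (status word in bytes 14-15, phase tag in bit 0, SC in bits 1-8, SCT in bits 9-11):

    # Sketch: hand-decode the completion captured above.
    cpl=AAAAAAAAAAAAAAAAAAACAA==
    bytes=($(base64 -d <(printf '%s' "$cpl") | hexdump -ve '/1 "0x%02x\n"'))
    status=$(( (bytes[15] << 8) | bytes[14] ))   # 0x0002
    printf 'SC=0x%x SCT=0x%x\n' \
        $(( (status >> 1) & 0xff )) $(( (status >> 9) & 0x7 ))
    # -> SC=0x1 SCT=0x0, matching the injected --sc 1 / --sct 0

That is why the (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) check above passes: the reset completed the stuck Get Features with exactly the injected status.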
00:08:56.881 user 0m12.592s 00:08:56.881 sys 0m0.456s 00:08:56.881 10:40:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:56.881 10:40:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:56.881 10:40:56 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:08:56.881 10:40:56 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:08:56.881 10:40:56 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:56.881 10:40:56 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:56.881 10:40:56 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:56.881 ************************************ 00:08:56.881 START TEST nvme_fio 00:08:56.881 ************************************ 00:08:56.881 10:40:56 nvme.nvme_fio -- common/autotest_common.sh@1125 -- # nvme_fio_test 00:08:56.881 10:40:56 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:08:56.881 10:40:56 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:08:56.881 10:40:56 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:08:56.881 10:40:56 nvme.nvme_fio -- common/autotest_common.sh@1496 -- # bdfs=() 00:08:56.881 10:40:56 nvme.nvme_fio -- common/autotest_common.sh@1496 -- # local bdfs 00:08:56.881 10:40:56 nvme.nvme_fio -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:56.881 10:40:56 nvme.nvme_fio -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:08:56.881 10:40:56 nvme.nvme_fio -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:56.881 10:40:56 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:08:56.881 10:40:56 nvme.nvme_fio -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:56.881 10:40:56 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:08:56.881 10:40:56 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:08:56.881 10:40:56 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:08:56.881 10:40:56 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:08:56.882 10:40:56 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:08:57.140 10:40:57 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:08:57.140 10:40:57 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:08:57.399 10:40:57 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:08:57.399 10:40:57 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:08:57.399 10:40:57 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:08:57.399 10:40:57 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:08:57.399 10:40:57 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:08:57.399 10:40:57 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:08:57.399 10:40:57 nvme.nvme_fio -- 
common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:57.399 10:40:57 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:08:57.399 10:40:57 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:08:57.399 10:40:57 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:08:57.399 10:40:57 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:57.399 10:40:57 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:08:57.399 10:40:57 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:08:57.399 10:40:57 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:08:57.399 10:40:57 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:08:57.399 10:40:57 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:08:57.399 10:40:57 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:08:57.399 10:40:57 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:08:57.657 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:08:57.657 fio-3.35 00:08:57.657 Starting 1 thread 00:09:04.260 00:09:04.260 test: (groupid=0, jobs=1): err= 0: pid=76048: Mon Dec 16 10:41:03 2024 00:09:04.260 read: IOPS=25.0k, BW=97.9MiB/s (103MB/s)(196MiB/2001msec) 00:09:04.260 slat (nsec): min=4215, max=61451, avg=4747.65, stdev=1585.53 00:09:04.260 clat (usec): min=226, max=7267, avg=2552.62, stdev=583.85 00:09:04.260 lat (usec): min=230, max=7277, avg=2557.37, stdev=584.74 00:09:04.260 clat percentiles (usec): 00:09:04.260 | 1.00th=[ 1729], 5.00th=[ 2180], 10.00th=[ 2311], 20.00th=[ 2343], 00:09:04.260 | 30.00th=[ 2376], 40.00th=[ 2409], 50.00th=[ 2442], 60.00th=[ 2474], 00:09:04.260 | 70.00th=[ 2507], 80.00th=[ 2573], 90.00th=[ 2802], 95.00th=[ 3326], 00:09:04.260 | 99.00th=[ 5538], 99.50th=[ 6128], 99.90th=[ 6980], 99.95th=[ 7111], 00:09:04.260 | 99.99th=[ 7242] 00:09:04.260 bw ( KiB/s): min=94784, max=101576, per=98.60%, avg=98800.00, stdev=3561.74, samples=3 00:09:04.260 iops : min=23696, max=25394, avg=24700.00, stdev=890.44, samples=3 00:09:04.260 write: IOPS=24.9k, BW=97.3MiB/s (102MB/s)(195MiB/2001msec); 0 zone resets 00:09:04.260 slat (usec): min=4, max=166, avg= 5.02, stdev= 1.81 00:09:04.260 clat (usec): min=249, max=7251, avg=2554.18, stdev=581.77 00:09:04.260 lat (usec): min=254, max=7261, avg=2559.20, stdev=582.71 00:09:04.260 clat percentiles (usec): 00:09:04.260 | 1.00th=[ 1713], 5.00th=[ 2180], 10.00th=[ 2311], 20.00th=[ 2343], 00:09:04.260 | 30.00th=[ 2376], 40.00th=[ 2409], 50.00th=[ 2442], 60.00th=[ 2474], 00:09:04.260 | 70.00th=[ 2507], 80.00th=[ 2573], 90.00th=[ 2802], 95.00th=[ 3326], 00:09:04.260 | 99.00th=[ 5538], 99.50th=[ 6194], 99.90th=[ 6783], 99.95th=[ 7111], 00:09:04.260 | 99.99th=[ 7242] 00:09:04.260 bw ( KiB/s): min=95224, max=101552, per=99.25%, avg=98861.33, stdev=3268.49, samples=3 00:09:04.260 iops : min=23806, max=25388, avg=24715.33, stdev=817.12, samples=3 00:09:04.260 lat (usec) : 250=0.01%, 500=0.01%, 750=0.02%, 1000=0.03% 00:09:04.260 lat (msec) : 2=2.79%, 4=93.72%, 10=3.43% 00:09:04.260 cpu : usr=99.30%, sys=0.10%, ctx=6, majf=0, minf=626 
00:09:04.260 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:04.260 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:04.260 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:04.260 issued rwts: total=50125,49827,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:04.260 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:04.260 00:09:04.260 Run status group 0 (all jobs): 00:09:04.260 READ: bw=97.9MiB/s (103MB/s), 97.9MiB/s-97.9MiB/s (103MB/s-103MB/s), io=196MiB (205MB), run=2001-2001msec 00:09:04.260 WRITE: bw=97.3MiB/s (102MB/s), 97.3MiB/s-97.3MiB/s (102MB/s-102MB/s), io=195MiB (204MB), run=2001-2001msec 00:09:04.260 ----------------------------------------------------- 00:09:04.260 Suppressions used: 00:09:04.260 count bytes template 00:09:04.260 1 32 /usr/src/fio/parse.c 00:09:04.260 1 8 libtcmalloc_minimal.so 00:09:04.260 ----------------------------------------------------- 00:09:04.260 00:09:04.260 10:41:04 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:04.260 10:41:04 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:04.260 10:41:04 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:04.260 10:41:04 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:09:04.518 10:41:04 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:09:04.518 10:41:04 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:04.776 10:41:04 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:04.776 10:41:04 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:09:04.776 10:41:04 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:09:04.776 10:41:04 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:09:04.776 10:41:04 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:04.776 10:41:04 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:09:04.776 10:41:04 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:04.776 10:41:04 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:09:04.776 10:41:04 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:09:04.776 10:41:04 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:09:04.776 10:41:04 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:09:04.776 10:41:04 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:09:04.776 10:41:04 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:04.776 10:41:04 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:04.776 10:41:04 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:04.776 10:41:04 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:09:04.776 10:41:04 nvme.nvme_fio -- common/autotest_common.sh@1352 
-- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:04.776 10:41:04 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:09:04.776 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:04.776 fio-3.35 00:09:04.776 Starting 1 thread 00:09:11.344 00:09:11.344 test: (groupid=0, jobs=1): err= 0: pid=76104: Mon Dec 16 10:41:11 2024 00:09:11.344 read: IOPS=23.1k, BW=90.2MiB/s (94.6MB/s)(181MiB/2001msec) 00:09:11.344 slat (nsec): min=3316, max=73045, avg=4846.80, stdev=2022.45 00:09:11.344 clat (usec): min=263, max=10443, avg=2759.85, stdev=916.30 00:09:11.344 lat (usec): min=267, max=10487, avg=2764.69, stdev=917.28 00:09:11.344 clat percentiles (usec): 00:09:11.344 | 1.00th=[ 1598], 5.00th=[ 2057], 10.00th=[ 2114], 20.00th=[ 2278], 00:09:11.344 | 30.00th=[ 2343], 40.00th=[ 2376], 50.00th=[ 2442], 60.00th=[ 2540], 00:09:11.344 | 70.00th=[ 2671], 80.00th=[ 2933], 90.00th=[ 3949], 95.00th=[ 5014], 00:09:11.344 | 99.00th=[ 6259], 99.50th=[ 6587], 99.90th=[ 7373], 99.95th=[ 8356], 00:09:11.344 | 99.99th=[10290] 00:09:11.344 bw ( KiB/s): min=88560, max=96960, per=100.00%, avg=93592.00, stdev=4440.35, samples=3 00:09:11.344 iops : min=22140, max=24240, avg=23398.00, stdev=1110.09, samples=3 00:09:11.344 write: IOPS=23.0k, BW=89.7MiB/s (94.1MB/s)(180MiB/2001msec); 0 zone resets 00:09:11.344 slat (nsec): min=3415, max=82565, avg=5042.50, stdev=2114.28 00:09:11.344 clat (usec): min=255, max=10375, avg=2778.68, stdev=931.26 00:09:11.344 lat (usec): min=259, max=10388, avg=2783.72, stdev=932.24 00:09:11.344 clat percentiles (usec): 00:09:11.344 | 1.00th=[ 1614], 5.00th=[ 2057], 10.00th=[ 2114], 20.00th=[ 2278], 00:09:11.344 | 30.00th=[ 2343], 40.00th=[ 2409], 50.00th=[ 2474], 60.00th=[ 2540], 00:09:11.344 | 70.00th=[ 2704], 80.00th=[ 2999], 90.00th=[ 4015], 95.00th=[ 5080], 00:09:11.344 | 99.00th=[ 6259], 99.50th=[ 6718], 99.90th=[ 7570], 99.95th=[ 8848], 00:09:11.344 | 99.99th=[ 9896] 00:09:11.344 bw ( KiB/s): min=89536, max=96672, per=100.00%, avg=93664.00, stdev=3697.49, samples=3 00:09:11.344 iops : min=22384, max=24168, avg=23415.33, stdev=924.07, samples=3 00:09:11.344 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.04% 00:09:11.344 lat (msec) : 2=3.08%, 4=86.96%, 10=9.88%, 20=0.01% 00:09:11.344 cpu : usr=99.20%, sys=0.00%, ctx=16, majf=0, minf=625 00:09:11.344 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:11.344 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:11.344 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:11.344 issued rwts: total=46217,45955,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:11.344 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:11.344 00:09:11.344 Run status group 0 (all jobs): 00:09:11.344 READ: bw=90.2MiB/s (94.6MB/s), 90.2MiB/s-90.2MiB/s (94.6MB/s-94.6MB/s), io=181MiB (189MB), run=2001-2001msec 00:09:11.344 WRITE: bw=89.7MiB/s (94.1MB/s), 89.7MiB/s-89.7MiB/s (94.1MB/s-94.1MB/s), io=180MiB (188MB), run=2001-2001msec 00:09:11.344 ----------------------------------------------------- 00:09:11.344 Suppressions used: 00:09:11.344 count bytes template 00:09:11.344 1 32 /usr/src/fio/parse.c 00:09:11.344 1 8 libtcmalloc_minimal.so 00:09:11.344 ----------------------------------------------------- 00:09:11.344 00:09:11.344 10:41:11 nvme.nvme_fio -- 
nvme/nvme.sh@44 -- # ran_fio=true 00:09:11.344 10:41:11 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:11.344 10:41:11 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:11.344 10:41:11 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:09:11.604 10:41:11 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:09:11.604 10:41:11 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:11.866 10:41:11 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:11.866 10:41:11 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:11.866 10:41:11 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:11.866 10:41:11 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:09:11.866 10:41:11 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:11.866 10:41:11 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:09:11.866 10:41:11 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:11.866 10:41:11 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:09:11.866 10:41:11 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:09:11.866 10:41:11 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:09:11.866 10:41:11 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:09:11.866 10:41:11 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:11.866 10:41:11 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:09:11.866 10:41:11 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:11.866 10:41:11 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:11.866 10:41:11 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:09:11.866 10:41:11 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:11.866 10:41:11 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:11.866 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:11.866 fio-3.35 00:09:11.866 Starting 1 thread 00:09:19.983 00:09:19.983 test: (groupid=0, jobs=1): err= 0: pid=76159: Mon Dec 16 10:41:18 2024 00:09:19.983 read: IOPS=23.6k, BW=92.0MiB/s (96.5MB/s)(184MiB/2001msec) 00:09:19.983 slat (nsec): min=3354, max=70069, avg=4842.62, stdev=1845.72 00:09:19.983 clat (usec): min=200, max=9740, avg=2711.59, stdev=836.21 00:09:19.983 lat (usec): min=205, max=9775, avg=2716.43, stdev=837.09 00:09:19.983 clat percentiles (usec): 00:09:19.983 | 1.00th=[ 1827], 5.00th=[ 2089], 10.00th=[ 2212], 20.00th=[ 2311], 00:09:19.983 | 30.00th=[ 2343], 40.00th=[ 2376], 50.00th=[ 
2442], 60.00th=[ 2507], 00:09:19.983 | 70.00th=[ 2606], 80.00th=[ 2835], 90.00th=[ 3589], 95.00th=[ 4752], 00:09:19.983 | 99.00th=[ 6194], 99.50th=[ 6587], 99.90th=[ 7373], 99.95th=[ 7963], 00:09:19.983 | 99.99th=[ 9503] 00:09:19.983 bw ( KiB/s): min=90104, max=93880, per=97.58%, avg=91962.67, stdev=1888.68, samples=3 00:09:19.983 iops : min=22526, max=23470, avg=22990.67, stdev=472.17, samples=3 00:09:19.983 write: IOPS=23.4k, BW=91.4MiB/s (95.8MB/s)(183MiB/2001msec); 0 zone resets 00:09:19.983 slat (nsec): min=3474, max=77500, avg=5030.92, stdev=1872.64 00:09:19.983 clat (usec): min=283, max=9553, avg=2721.67, stdev=839.23 00:09:19.983 lat (usec): min=288, max=9563, avg=2726.71, stdev=840.06 00:09:19.983 clat percentiles (usec): 00:09:19.983 | 1.00th=[ 1827], 5.00th=[ 2089], 10.00th=[ 2212], 20.00th=[ 2311], 00:09:19.983 | 30.00th=[ 2343], 40.00th=[ 2376], 50.00th=[ 2442], 60.00th=[ 2507], 00:09:19.983 | 70.00th=[ 2606], 80.00th=[ 2868], 90.00th=[ 3621], 95.00th=[ 4752], 00:09:19.983 | 99.00th=[ 6194], 99.50th=[ 6521], 99.90th=[ 7570], 99.95th=[ 8029], 00:09:19.983 | 99.99th=[ 9110] 00:09:19.983 bw ( KiB/s): min=89496, max=95272, per=98.35%, avg=92048.00, stdev=2946.05, samples=3 00:09:19.983 iops : min=22374, max=23818, avg=23012.00, stdev=736.51, samples=3 00:09:19.983 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:09:19.983 lat (msec) : 2=2.51%, 4=89.27%, 10=8.18% 00:09:19.983 cpu : usr=99.10%, sys=0.20%, ctx=3, majf=0, minf=625 00:09:19.983 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:19.983 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:19.983 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:19.983 issued rwts: total=47146,46819,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:19.983 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:19.983 00:09:19.983 Run status group 0 (all jobs): 00:09:19.983 READ: bw=92.0MiB/s (96.5MB/s), 92.0MiB/s-92.0MiB/s (96.5MB/s-96.5MB/s), io=184MiB (193MB), run=2001-2001msec 00:09:19.983 WRITE: bw=91.4MiB/s (95.8MB/s), 91.4MiB/s-91.4MiB/s (95.8MB/s-95.8MB/s), io=183MiB (192MB), run=2001-2001msec 00:09:19.983 ----------------------------------------------------- 00:09:19.983 Suppressions used: 00:09:19.983 count bytes template 00:09:19.983 1 32 /usr/src/fio/parse.c 00:09:19.983 1 8 libtcmalloc_minimal.so 00:09:19.983 ----------------------------------------------------- 00:09:19.983 00:09:19.983 10:41:19 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:19.983 10:41:19 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:19.983 10:41:19 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:09:19.983 10:41:19 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:19.983 10:41:19 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:09:19.983 10:41:19 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:19.983 10:41:19 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:19.983 10:41:19 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:19.983 10:41:19 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 
/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:19.983 10:41:19 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:09:19.983 10:41:19 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:19.983 10:41:19 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:09:19.983 10:41:19 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:19.983 10:41:19 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:09:19.983 10:41:19 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:09:19.983 10:41:19 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:09:19.983 10:41:19 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:09:19.983 10:41:19 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:19.983 10:41:19 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:09:19.983 10:41:19 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:19.983 10:41:19 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:19.983 10:41:19 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:09:19.983 10:41:19 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:19.983 10:41:19 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:19.983 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:19.983 fio-3.35 00:09:19.984 Starting 1 thread 00:09:28.097 00:09:28.097 test: (groupid=0, jobs=1): err= 0: pid=76214: Mon Dec 16 10:41:26 2024 00:09:28.097 read: IOPS=23.9k, BW=93.4MiB/s (97.9MB/s)(187MiB/2001msec) 00:09:28.097 slat (usec): min=4, max=188, avg= 4.93, stdev= 2.26 00:09:28.097 clat (usec): min=298, max=11964, avg=2673.41, stdev=848.77 00:09:28.097 lat (usec): min=302, max=12027, avg=2678.34, stdev=849.94 00:09:28.097 clat percentiles (usec): 00:09:28.097 | 1.00th=[ 1942], 5.00th=[ 2245], 10.00th=[ 2278], 20.00th=[ 2343], 00:09:28.097 | 30.00th=[ 2376], 40.00th=[ 2376], 50.00th=[ 2409], 60.00th=[ 2442], 00:09:28.097 | 70.00th=[ 2507], 80.00th=[ 2671], 90.00th=[ 3195], 95.00th=[ 4686], 00:09:28.097 | 99.00th=[ 6456], 99.50th=[ 7242], 99.90th=[ 8455], 99.95th=[ 9503], 00:09:28.097 | 99.99th=[11731] 00:09:28.097 bw ( KiB/s): min=90472, max=103408, per=100.00%, avg=98354.67, stdev=6916.56, samples=3 00:09:28.097 iops : min=22618, max=25852, avg=24588.67, stdev=1729.14, samples=3 00:09:28.097 write: IOPS=23.8k, BW=92.8MiB/s (97.3MB/s)(186MiB/2001msec); 0 zone resets 00:09:28.097 slat (usec): min=4, max=475, avg= 5.17, stdev= 3.01 00:09:28.097 clat (usec): min=288, max=11830, avg=2679.03, stdev=852.26 00:09:28.097 lat (usec): min=294, max=11844, avg=2684.20, stdev=853.47 00:09:28.097 clat percentiles (usec): 00:09:28.097 | 1.00th=[ 1909], 5.00th=[ 2245], 10.00th=[ 2311], 20.00th=[ 2343], 00:09:28.097 | 30.00th=[ 2376], 40.00th=[ 2376], 50.00th=[ 2409], 60.00th=[ 2442], 00:09:28.097 | 70.00th=[ 2507], 80.00th=[ 2671], 90.00th=[ 3195], 95.00th=[ 4686], 00:09:28.097 | 
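Each of the four per-controller passes in this nvme_fio run follows the same recipe: run spdk_nvme_identify to confirm a namespace exists, set --bs to 4096 unless the namespace reports an extended data LBA, then run stock fio against the SPDK external ioengine. Stripped of the ASAN preload detection (the libasan path is build-specific), the invocation reduces to:

    # What fio_nvme boils down to for one controller (paths from this log).
    SPDK=/home/vagrant/spdk_repo/spdk

    LD_PRELOAD=$SPDK/build/fio/spdk_nvme \
        /usr/src/fio/fio "$SPDK/app/fio/nvme/example_config.fio" \
        '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096

Note the filename syntax: the PCI address is written with dots (0000.00.13.0) because fio reserves ':' as a filename separator, so the SPDK plugin expects colons replaced with dots.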
99.00th=[ 6521], 99.50th=[ 7242], 99.90th=[ 8586], 99.95th=[ 9896], 00:09:28.097 | 99.99th=[11600] 00:09:28.097 bw ( KiB/s): min=89912, max=104400, per=100.00%, avg=98378.67, stdev=7547.20, samples=3 00:09:28.097 iops : min=22478, max=26100, avg=24594.67, stdev=1886.80, samples=3 00:09:28.097 lat (usec) : 500=0.02%, 750=0.01%, 1000=0.03% 00:09:28.097 lat (msec) : 2=1.24%, 4=91.69%, 10=6.96%, 20=0.05% 00:09:28.097 cpu : usr=98.80%, sys=0.15%, ctx=25, majf=0, minf=624 00:09:28.097 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:28.097 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:28.097 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:28.097 issued rwts: total=47825,47535,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:28.097 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:28.097 00:09:28.097 Run status group 0 (all jobs): 00:09:28.097 READ: bw=93.4MiB/s (97.9MB/s), 93.4MiB/s-93.4MiB/s (97.9MB/s-97.9MB/s), io=187MiB (196MB), run=2001-2001msec 00:09:28.097 WRITE: bw=92.8MiB/s (97.3MB/s), 92.8MiB/s-92.8MiB/s (97.3MB/s-97.3MB/s), io=186MiB (195MB), run=2001-2001msec 00:09:28.098 ----------------------------------------------------- 00:09:28.098 Suppressions used: 00:09:28.098 count bytes template 00:09:28.098 1 32 /usr/src/fio/parse.c 00:09:28.098 1 8 libtcmalloc_minimal.so 00:09:28.098 ----------------------------------------------------- 00:09:28.098 00:09:28.098 10:41:27 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:28.098 10:41:27 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:09:28.098 00:09:28.098 real 0m30.364s 00:09:28.098 user 0m16.968s 00:09:28.098 sys 0m24.957s 00:09:28.098 10:41:27 nvme.nvme_fio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:28.098 10:41:27 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:09:28.098 ************************************ 00:09:28.098 END TEST nvme_fio 00:09:28.098 ************************************ 00:09:28.098 00:09:28.098 real 1m37.759s 00:09:28.098 user 3m31.612s 00:09:28.098 sys 0m34.871s 00:09:28.098 10:41:27 nvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:28.098 10:41:27 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:28.098 ************************************ 00:09:28.098 END TEST nvme 00:09:28.098 ************************************ 00:09:28.098 10:41:27 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]] 00:09:28.098 10:41:27 -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:09:28.098 10:41:27 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:28.098 10:41:27 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:28.098 10:41:27 -- common/autotest_common.sh@10 -- # set +x 00:09:28.098 ************************************ 00:09:28.098 START TEST nvme_scc 00:09:28.098 ************************************ 00:09:28.098 10:41:27 nvme_scc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:09:28.098 * Looking for test storage... 
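
The run-status numbers are internally consistent: the READ line can be reproduced from the issued I/O count and runtime reported a few lines up (47825 reads of 4096 B over 2001 ms). A quick check, with the values copied from this log:

ios=47825; bs=4096; runtime_ms=2001   # from "issued rwts" and "run=2001-2001msec" above
awk -v n="$ios" -v b="$bs" -v t="$runtime_ms" \
    'BEGIN { printf "%.1f MiB/s  %.1f MB/s\n", n*b/(t/1000)/2^20, n*b/(t/1000)/1e6 }'
# -> 93.4 MiB/s  97.9 MB/s, matching "READ: bw=93.4MiB/s (97.9MB/s)" above.
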
00:09:28.098 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:28.098 10:41:27 nvme_scc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:28.098 10:41:27 nvme_scc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:28.098 10:41:27 nvme_scc -- common/autotest_common.sh@1681 -- # lcov --version 00:09:28.098 10:41:27 nvme_scc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:28.098 10:41:27 nvme_scc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:28.098 10:41:27 nvme_scc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:28.098 10:41:27 nvme_scc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:28.098 10:41:27 nvme_scc -- scripts/common.sh@336 -- # IFS=.-: 00:09:28.098 10:41:27 nvme_scc -- scripts/common.sh@336 -- # read -ra ver1 00:09:28.098 10:41:27 nvme_scc -- scripts/common.sh@337 -- # IFS=.-: 00:09:28.098 10:41:27 nvme_scc -- scripts/common.sh@337 -- # read -ra ver2 00:09:28.098 10:41:27 nvme_scc -- scripts/common.sh@338 -- # local 'op=<' 00:09:28.098 10:41:27 nvme_scc -- scripts/common.sh@340 -- # ver1_l=2 00:09:28.098 10:41:27 nvme_scc -- scripts/common.sh@341 -- # ver2_l=1 00:09:28.098 10:41:27 nvme_scc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:28.098 10:41:27 nvme_scc -- scripts/common.sh@344 -- # case "$op" in 00:09:28.098 10:41:27 nvme_scc -- scripts/common.sh@345 -- # : 1 00:09:28.098 10:41:27 nvme_scc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:28.098 10:41:27 nvme_scc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:28.098 10:41:27 nvme_scc -- scripts/common.sh@365 -- # decimal 1 00:09:28.098 10:41:27 nvme_scc -- scripts/common.sh@353 -- # local d=1 00:09:28.098 10:41:27 nvme_scc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:28.098 10:41:27 nvme_scc -- scripts/common.sh@355 -- # echo 1 00:09:28.098 10:41:27 nvme_scc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:28.098 10:41:27 nvme_scc -- scripts/common.sh@366 -- # decimal 2 00:09:28.098 10:41:27 nvme_scc -- scripts/common.sh@353 -- # local d=2 00:09:28.098 10:41:27 nvme_scc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:28.098 10:41:27 nvme_scc -- scripts/common.sh@355 -- # echo 2 00:09:28.098 10:41:27 nvme_scc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:28.098 10:41:27 nvme_scc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:28.098 10:41:27 nvme_scc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:28.098 10:41:27 nvme_scc -- scripts/common.sh@368 -- # return 0 00:09:28.098 10:41:27 nvme_scc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:28.098 10:41:27 nvme_scc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:28.098 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:28.098 --rc genhtml_branch_coverage=1 00:09:28.098 --rc genhtml_function_coverage=1 00:09:28.098 --rc genhtml_legend=1 00:09:28.098 --rc geninfo_all_blocks=1 00:09:28.098 --rc geninfo_unexecuted_blocks=1 00:09:28.098 00:09:28.098 ' 00:09:28.098 10:41:27 nvme_scc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:28.098 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:28.098 --rc genhtml_branch_coverage=1 00:09:28.098 --rc genhtml_function_coverage=1 00:09:28.098 --rc genhtml_legend=1 00:09:28.098 --rc geninfo_all_blocks=1 00:09:28.098 --rc geninfo_unexecuted_blocks=1 00:09:28.098 00:09:28.098 ' 00:09:28.098 10:41:27 nvme_scc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 
00:09:28.098 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:28.098 --rc genhtml_branch_coverage=1 00:09:28.098 --rc genhtml_function_coverage=1 00:09:28.098 --rc genhtml_legend=1 00:09:28.098 --rc geninfo_all_blocks=1 00:09:28.098 --rc geninfo_unexecuted_blocks=1 00:09:28.098 00:09:28.098 ' 00:09:28.098 10:41:27 nvme_scc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:28.098 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:28.098 --rc genhtml_branch_coverage=1 00:09:28.098 --rc genhtml_function_coverage=1 00:09:28.098 --rc genhtml_legend=1 00:09:28.098 --rc geninfo_all_blocks=1 00:09:28.098 --rc geninfo_unexecuted_blocks=1 00:09:28.098 00:09:28.098 ' 00:09:28.098 10:41:27 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:28.098 10:41:27 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:28.098 10:41:27 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:09:28.098 10:41:27 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:09:28.098 10:41:27 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:28.098 10:41:27 nvme_scc -- scripts/common.sh@15 -- # shopt -s extglob 00:09:28.098 10:41:27 nvme_scc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:28.098 10:41:27 nvme_scc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:28.098 10:41:27 nvme_scc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:28.098 10:41:27 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:28.098 10:41:27 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:28.098 10:41:27 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:28.098 10:41:27 nvme_scc -- paths/export.sh@5 -- # export PATH 00:09:28.098 10:41:27 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
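
The scripts/common.sh trace above is the lcov version gate: lt 1.15 2 splits both version strings on '.', '-' and ':' and compares the fields numerically from the left, so 1.15 sorts below 2 and the older --rc lcov_branch_coverage=1 option spelling is kept. A condensed sketch of that comparison, simplified from the traced cmp_versions rather than copied from it:

lt() {  # succeed when version $1 sorts strictly below version $2
    local IFS=.-: v ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
        ((10#${ver1[v]:-0} < 10#${ver2[v]:-0})) && return 0  # first differing field decides
        ((10#${ver1[v]:-0} > 10#${ver2[v]:-0})) && return 1
    done
    return 1  # equal versions are not "less than"
}
lt 1.15 2 && echo "lcov < 2: keep --rc lcov_branch_coverage=1 style flags"
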
00:09:28.098 10:41:27 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:09:28.098 10:41:27 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:09:28.098 10:41:27 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:09:28.098 10:41:27 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:09:28.098 10:41:27 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:09:28.098 10:41:27 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:09:28.098 10:41:27 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:09:28.098 10:41:27 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:09:28.098 10:41:27 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:09:28.098 10:41:27 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:28.098 10:41:27 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:09:28.098 10:41:27 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:09:28.098 10:41:27 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:09:28.098 10:41:27 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:28.098 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:28.098 Waiting for block devices as requested 00:09:28.098 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:28.098 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:28.098 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:28.358 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:33.648 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:33.648 10:41:33 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:09:33.648 10:41:33 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:09:33.648 10:41:33 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:33.648 10:41:33 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:09:33.648 10:41:33 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:09:33.648 10:41:33 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:09:33.648 10:41:33 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:33.648 10:41:33 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:33.648 10:41:33 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:33.648 10:41:33 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:33.648 10:41:33 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:09:33.648 10:41:33 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:09:33.648 10:41:33 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:09:33.648 10:41:33 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:33.648 10:41:33 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:09:33.648 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.648 10:41:33 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:09:33.648 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.648 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:33.648 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.648 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.648 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:33.648 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:09:33.648 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 
00:09:33.648 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.648 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.648 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:33.648 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:09:33.648 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:09:33.648 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.648 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.648 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:09:33.648 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:09:33.648 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:09:33.648 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.648 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.648 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:33.648 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:09:33.648 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:09:33.648 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.648 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.648 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:33.648 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:09:33.648 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:09:33.648 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.648 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.648 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:33.648 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:09:33.648 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:09:33.648 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.648 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.648 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:33.648 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:09:33.648 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:09:33.648 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.648 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.648 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.648 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:09:33.648 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:09:33.648 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.648 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.648 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:33.648 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:09:33.648 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:09:33.648 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.648 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.648 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.648 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:09:33.648 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:09:33.648 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.648 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:09:33.648 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:33.648 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:09:33.648 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:09:33.648 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.648 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.648 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.648 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:09:33.648 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:09:33.648 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.648 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.648 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.648 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:09:33.648 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:09:33.648 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.648 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.648 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:33.648 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:09:33.648 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:09:33.648 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.648 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.648 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:33.648 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:09:33.648 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:09:33.648 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.648 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.648 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.648 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:09:33.648 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:09:33.648 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.648 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.648 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:33.648 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:09:33.648 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:09:33.648 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.648 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.648 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:33.648 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:33.648 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:09:33.648 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.648 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.648 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.648 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:09:33.648 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:09:33.648 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.648 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.648 10:41:33 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.648 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:09:33.648 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:09:33.648 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.648 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.648 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.649 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:09:33.649 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:09:33.649 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.649 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.649 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.649 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:09:33.649 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:09:33.649 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.649 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.649 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.649 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:09:33.649 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:09:33.649 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.649 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.649 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.649 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:09:33.649 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:09:33.649 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.649 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.649 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:33.649 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:09:33.649 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:09:33.649 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.649 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.649 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:33.649 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:09:33.649 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:09:33.649 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.649 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.649 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:33.649 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:09:33.649 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:09:33.649 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.649 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.649 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:33.649 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:09:33.649 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:09:33.649 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.649 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.649 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:33.649 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:09:33.649 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 
00:09:33.649 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.649 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.649 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.649 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:09:33.649 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:09:33.649 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.649 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.649 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.649 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:09:33.649 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:09:33.649 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.649 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.649 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.649 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:09:33.649 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:09:33.649 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.649 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.649 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.649 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:09:33.649 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:09:33.649 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.649 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.649 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:33.649 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:09:33.649 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:09:33.649 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.649 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.649 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:33.649 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:09:33.649 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:09:33.649 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.649 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.649 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.649 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:09:33.649 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:09:33.649 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.649 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.649 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.649 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:09:33.649 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:09:33.649 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.649 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.649 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.649 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:09:33.649 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:09:33.649 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.649 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.649 10:41:33 nvme_scc -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:09:33.649 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:09:33.649 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:09:33.649 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.649 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.649 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.649 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:09:33.649 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:09:33.649 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.649 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.649 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.649 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:09:33.649 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:09:33.649 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.649 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.649 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.649 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:09:33.649 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:09:33.649 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.649 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.649 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.649 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:09:33.649 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:09:33.649 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.649 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.649 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.649 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:09:33.649 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:09:33.649 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.649 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.649 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.649 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:09:33.649 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:09:33.649 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.649 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.649 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.649 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:09:33.649 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:09:33.649 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.649 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.649 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.649 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:09:33.649 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:09:33.649 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.649 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.649 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.649 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:09:33.649 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:09:33.649 10:41:33 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:09:33.649 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.649 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.649 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:09:33.649 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:09:33.649 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.649 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.649 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.649 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:09:33.649 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:09:33.649 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.649 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.649 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.649 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:09:33.649 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:09:33.649 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.649 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.649 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.649 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:09:33.649 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:09:33.649 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.649 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.649 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.649 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:09:33.649 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:09:33.650 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.650 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.650 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.650 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:09:33.650 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:09:33.650 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.650 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.650 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.650 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:09:33.650 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:09:33.650 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.650 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.650 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.650 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:09:33.650 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:09:33.650 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.650 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.650 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.650 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:09:33.650 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:09:33.650 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.650 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.650 10:41:33 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.650 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:09:33.650 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:09:33.650 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.650 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.650 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.650 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:09:33.650 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:09:33.650 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.650 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.650 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.650 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:09:33.650 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:09:33.650 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.650 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.650 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:33.650 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:09:33.650 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:09:33.650 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.650 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.650 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:33.650 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:09:33.650 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:09:33.650 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.650 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.650 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.650 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:09:33.650 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:09:33.650 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.650 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.650 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:33.650 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:09:33.650 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:09:33.650 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.650 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.650 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:33.650 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:09:33.650 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:09:33.650 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.650 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.650 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.650 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:09:33.650 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:09:33.650 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.650 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.650 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.650 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:09:33.650 10:41:33 nvme_scc -- nvme/functions.sh@23 
-- # nvme0[fna]=0 00:09:33.650 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.650 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.650 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:33.650 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:09:33.650 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:09:33.650 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.650 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.650 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.650 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:09:33.650 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:09:33.650 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.650 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.650 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.650 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:09:33.650 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:09:33.650 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.650 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.650 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.650 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:09:33.650 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:09:33.650 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.650 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.650 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.650 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:09:33.650 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:09:33.650 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.650 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.650 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.650 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:09:33.650 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:09:33.650 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.650 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.650 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:33.650 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:09:33.650 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:09:33.650 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.650 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.650 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:33.650 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:09:33.650 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:09:33.650 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.650 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.650 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.650 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:09:33.650 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:09:33.650 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.650 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.650 10:41:33 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.650 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:09:33.650 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:09:33.650 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.650 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.650 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.650 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:09:33.650 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:09:33.650 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.650 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.650 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:09:33.650 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:09:33.650 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:09:33.650 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.650 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.650 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.650 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:09:33.650 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:09:33.650 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.650 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.650 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.650 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:09:33.650 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:09:33.650 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.650 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.650 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.650 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:09:33.650 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:09:33.650 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.650 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.650 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.650 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:09:33.650 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:09:33.650 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.650 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.650 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.650 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:09:33.650 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:09:33.650 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.650 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.650 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.651 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:09:33.651 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:09:33.651 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.651 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.651 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:33.651 10:41:33 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:33.651 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:33.651 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.651 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.651 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:33.651 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:33.651 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:33.651 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.651 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.651 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:33.651 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:09:33.651 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:09:33.651 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.651 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.651 10:41:33 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:09:33.651 10:41:33 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:33.651 10:41:33 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:09:33.651 10:41:33 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:09:33.651 10:41:33 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:09:33.651 10:41:33 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:09:33.651 10:41:33 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:33.651 10:41:33 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:09:33.651 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.651 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.651 10:41:33 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:09:33.651 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:33.651 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.651 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.651 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:33.651 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:09:33.651 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:09:33.651 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.651 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.651 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:33.651 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:09:33.651 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:09:33.651 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.651 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.651 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:33.651 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:09:33.651 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:09:33.651 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.651 10:41:33 nvme_scc -- nvme/functions.sh@21 -- 
# read -r reg val 00:09:33.651 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:33.651 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:09:33.651 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:09:33.651 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.651 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.651 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:33.651 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:09:33.651 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:09:33.651 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.651 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.651 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:33.651 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:09:33.651 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:09:33.651 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.651 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.651 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:33.651 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:09:33.651 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:09:33.651 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.651 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.651 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:33.651 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:09:33.651 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:09:33.651 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.651 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.651 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.651 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:09:33.651 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:09:33.651 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.651 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.651 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.651 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:09:33.651 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:09:33.651 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.651 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.651 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.651 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:09:33.651 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:09:33.651 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.651 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.651 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.651 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:09:33.651 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:09:33.651 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.651 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.651 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:33.651 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme0n1[dlfeat]="1"' 00:09:33.651 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:09:33.651 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.651 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.651 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.651 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:09:33.651 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:09:33.651 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.651 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.651 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.651 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:09:33.651 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:09:33.651 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.651 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.651 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.651 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:09:33.651 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:09:33.651 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.651 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.651 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.651 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:09:33.651 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:09:33.651 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.651 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.651 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.651 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:09:33.651 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:09:33.651 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.651 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.651 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.651 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:09:33.651 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:09:33.651 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.651 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.651 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.651 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:09:33.651 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:09:33.651 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.651 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.651 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.651 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:09:33.651 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:09:33.651 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.651 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.651 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.651 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:09:33.651 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:09:33.651 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:09:33.651 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.651 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.651 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:09:33.651 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:09:33.651 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.651 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.651 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.651 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:09:33.651 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:09:33.651 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.652 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.652 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.652 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:09:33.652 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:09:33.652 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.652 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.652 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.652 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:09:33.652 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:09:33.652 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.652 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.652 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:33.652 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:09:33.652 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:09:33.652 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.652 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.652 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:33.652 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:09:33.652 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:09:33.652 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.652 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.652 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:33.652 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:09:33.652 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:09:33.652 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.652 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.652 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.652 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:09:33.652 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:09:33.652 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.652 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.652 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.652 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:09:33.652 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:09:33.652 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.652 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.652 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:09:33.652 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:09:33.652 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:09:33.652 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.652 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.652 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.652 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:09:33.652 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:09:33.652 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.652 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.652 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.652 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:09:33.652 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:09:33.652 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.652 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.652 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:33.652 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:09:33.652 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:09:33.652 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.652 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.652 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:33.652 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:09:33.652 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:09:33.652 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.652 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.652 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:33.652 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:33.652 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:33.652 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.652 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.652 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:33.652 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:33.652 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:33.652 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.652 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.652 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:33.652 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:33.652 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:33.652 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.652 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.652 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:33.652 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:33.652 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:33.652 10:41:33 
nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.652 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.652 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:33.652 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:33.652 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:33.652 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.652 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.652 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:33.652 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:33.652 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:33.652 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.652 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.652 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:33.652 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:33.652 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:33.652 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.652 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.652 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:33.652 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:33.652 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:33.652 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.652 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.652 10:41:33 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:09:33.652 10:41:33 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:09:33.652 10:41:33 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:09:33.652 10:41:33 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:09:33.652 10:41:33 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:09:33.652 10:41:33 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:33.652 10:41:33 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:09:33.652 10:41:33 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:09:33.652 10:41:33 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:09:33.652 10:41:33 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:33.652 10:41:33 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:09:33.652 10:41:33 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:33.652 10:41:33 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:33.652 10:41:33 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:09:33.652 10:41:33 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:09:33.652 10:41:33 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:09:33.652 10:41:33 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:33.652 10:41:33 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:09:33.652 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.652 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.652 10:41:33 nvme_scc -- 
nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:09:33.652 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:33.652 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.652 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.652 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:33.652 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:09:33.652 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:09:33.653 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.653 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.653 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:33.653 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:09:33.653 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:09:33.653 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.653 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.653 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:09:33.653 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:09:33.653 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:09:33.653 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.653 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.653 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:33.653 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:09:33.653 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:09:33.653 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.653 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.653 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:33.653 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:09:33.653 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:09:33.653 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.653 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.653 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:33.653 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:09:33.653 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:09:33.653 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.653 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.653 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:33.653 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:09:33.653 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:09:33.653 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.653 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.653 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.653 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:09:33.653 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:09:33.653 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.653 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.653 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:33.653 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:09:33.653 10:41:33 nvme_scc -- 
nvme/functions.sh@23 -- # nvme1[mdts]=7 00:09:33.653 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.653 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.653 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.653 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:09:33.653 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:09:33.653 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.653 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.653 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:33.653 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:09:33.653 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:09:33.653 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.653 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.653 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.653 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:09:33.653 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:09:33.653 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.653 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.653 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.653 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:09:33.653 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:09:33.653 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.653 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.653 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:33.653 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:09:33.653 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:09:33.653 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.653 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.653 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:33.653 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:09:33.653 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:09:33.653 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.653 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.653 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.653 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:09:33.653 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:09:33.653 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.653 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.653 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:33.653 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:09:33.653 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:09:33.653 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.653 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.653 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:33.653 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:33.653 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:09:33.653 
10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.653 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.653 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.653 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:09:33.653 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:09:33.653 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.653 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.653 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.653 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:09:33.653 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:09:33.653 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.653 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.653 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.653 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:09:33.653 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:09:33.653 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.653 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.653 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.653 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:09:33.653 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:09:33.653 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.653 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.653 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.653 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:09:33.653 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:09:33.653 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.653 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.653 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.653 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:09:33.653 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:09:33.653 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.653 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.653 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:33.653 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:09:33.653 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:09:33.653 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.653 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.653 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:33.653 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:09:33.653 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:09:33.653 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.653 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.653 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:33.653 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:09:33.653 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:09:33.653 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.653 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.653 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 
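The nvme1 dump in progress here was reached through the enumeration pass traced just before it (functions.sh@47-63): nvme0's namespace array is sealed into _ctrl_ns, the controller is recorded in the ctrls/nvmes/bdfs maps, and the loop then finds /sys/class/nvme/nvme1, checks its PCI address with pci_can_use, and calls nvme_get again. A sketch of that outer loop under assumed names; in particular the readlink-based BDF lookup is illustrative, not the script's own code:

    # Sketch: enumerate controllers and record name -> PCI address mappings.
    declare -A ctrls nvmes bdfs
    for ctrl in /sys/class/nvme/nvme*; do
        [[ -e $ctrl ]] || continue
        ctrl_dev=${ctrl##*/}                             # e.g. nvme1
        bdf=$(basename "$(readlink -f "$ctrl/device")")  # e.g. 0000:00:10.0 (assumed lookup)
        ctrls[$ctrl_dev]=$ctrl_dev
        nvmes[$ctrl_dev]=${ctrl_dev}_ns                  # per-controller namespace array name
        bdfs[$ctrl_dev]=$bdf
    done
    for c in "${!bdfs[@]}"; do echo "$c -> ${bdfs[$c]}"; done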
00:09:33.653 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:09:33.653 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:09:33.653 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.653 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.653 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:33.653 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:09:33.653 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:09:33.653 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.653 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.653 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.653 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:09:33.653 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:09:33.653 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.653 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.653 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.653 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:09:33.653 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:09:33.653 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.653 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.653 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.653 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:09:33.653 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:09:33.653 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.653 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.653 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.653 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:09:33.654 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:09:33.654 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.654 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.654 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:33.654 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:09:33.654 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:09:33.654 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.654 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.654 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:33.654 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:09:33.654 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:09:33.654 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.654 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.654 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.654 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:09:33.654 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:09:33.654 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.654 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.654 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.654 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:09:33.654 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:09:33.654 10:41:33 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:33.654 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.654 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.654 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:09:33.654 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:09:33.654 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.654 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.654 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.654 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:09:33.654 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:09:33.654 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.654 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.654 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.654 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:09:33.654 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:09:33.654 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.654 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.654 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.654 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:09:33.654 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:09:33.654 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.654 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.654 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.654 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:09:33.654 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:09:33.654 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.654 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.654 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.654 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:09:33.654 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:09:33.654 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.654 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.654 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.654 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:09:33.654 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:09:33.654 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.654 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.654 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.654 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:09:33.654 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:09:33.654 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.654 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.654 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.654 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:09:33.654 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:09:33.654 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.654 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.654 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.654 10:41:33 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:09:33.654 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:09:33.654 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.654 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.654 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.654 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:09:33.654 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:09:33.654 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.654 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.654 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.654 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:09:33.654 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:09:33.654 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.654 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.654 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.654 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:09:33.654 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:09:33.654 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.654 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.654 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.654 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:09:33.654 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:09:33.654 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.654 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.654 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.654 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:09:33.654 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:09:33.654 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.654 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.654 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.654 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:09:33.654 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:09:33.654 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.654 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.654 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.654 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:09:33.654 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:09:33.654 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.654 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.654 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.654 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:09:33.654 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:09:33.654 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.654 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.654 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.654 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:09:33.654 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:09:33.654 10:41:33 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:33.654 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.654 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.654 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:09:33.654 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:09:33.654 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.654 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.654 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.654 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:09:33.654 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:09:33.654 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.654 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.654 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.654 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:09:33.654 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:09:33.654 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.654 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.654 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.654 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:09:33.654 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:09:33.654 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.654 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.654 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:33.654 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:09:33.654 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:09:33.654 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.654 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.654 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:33.654 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:09:33.654 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:09:33.654 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.654 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.654 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.654 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:09:33.654 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:09:33.654 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.654 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.654 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:33.654 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:09:33.654 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:09:33.654 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.654 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.654 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:33.654 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:09:33.654 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:09:33.654 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.654 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.654 10:41:33 nvme_scc -- nvme/functions.sh@22 -- 
# [[ -n 0 ]] 00:09:33.655 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:09:33.655 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:09:33.655 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.655 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.655 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.655 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:09:33.655 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:09:33.655 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.655 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.655 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:33.655 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:09:33.655 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:09:33.655 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.655 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.655 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.655 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:09:33.655 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:09:33.655 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.655 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.655 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.655 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:09:33.655 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:09:33.655 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.655 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.655 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.655 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:09:33.655 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:09:33.655 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.655 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.655 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.655 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:09:33.655 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:09:33.655 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.655 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.655 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.655 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:09:33.655 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:09:33.655 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.655 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.655 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:33.655 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:09:33.655 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:09:33.655 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.655 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.655 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:33.655 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:09:33.655 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:09:33.655 10:41:33 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:09:33.655 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.655 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.655 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:09:33.655 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:09:33.655 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.655 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.655 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.655 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:09:33.655 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:09:33.655 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.655 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.655 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.655 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:09:33.655 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:09:33.655 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.655 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.655 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:09:33.655 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:09:33.655 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:09:33.655 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.655 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.655 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.655 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:09:33.655 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:09:33.655 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.655 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.655 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.655 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:09:33.655 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:09:33.655 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.655 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.655 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.655 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:09:33.655 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:09:33.655 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.655 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.655 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.655 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:09:33.655 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:09:33.655 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.655 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.655 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.655 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:09:33.655 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:09:33.655 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.655 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
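With the fabrics fields (subnqn through msdbd) the scalar part of nvme1's id-ctrl capture is nearly complete; only ofcs and the power-state entries remain below. Most of the values recorded above are bitmasks or encoded sizes. A hedged decode of a few of them, using this controller's values (oacs=0x12a, oncs=0x15d, sqes=0x66, cqes=0x44, wctemp=343); bit meanings follow the NVMe base specification, and the helper name decode_ctrl is ours:

    # Sketch: unpack a handful of id-ctrl fields captured in the trace.
    decode_ctrl() {
        local oacs=$1 oncs=$2 sqes=$3 cqes=$4 wctemp=$5
        (( (oacs >> 1) & 1 )) && echo "OACS: Format NVM supported"
        (( (oacs >> 3) & 1 )) && echo "OACS: Namespace Management supported"
        (( (oncs >> 2) & 1 )) && echo "ONCS: Dataset Management supported"
        (( (oncs >> 8) & 1 )) && echo "ONCS: Copy supported"
        # low nibble = required (log2) entry size, high nibble = maximum
        echo "SQ entry size: $((1 << (sqes & 0xf)))-$((1 << (sqes >> 4))) bytes"
        echo "CQ entry size: $((1 << (cqes & 0xf)))-$((1 << (cqes >> 4))) bytes"
        echo "warning composite temp threshold: $((wctemp - 273)) C"  # Kelvin -> Celsius
    }
    decode_ctrl 0x12a 0x15d 0x66 0x44 343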
00:09:33.655 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.655 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:09:33.655 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:09:33.655 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.655 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.655 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:33.655 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:33.655 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:33.655 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.655 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.655 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:33.655 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:33.655 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:33.655 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.655 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.655 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:33.655 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:09:33.655 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:09:33.655 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.655 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.655 10:41:33 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:09:33.655 10:41:33 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:33.655 10:41:33 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:09:33.655 10:41:33 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:09:33.655 10:41:33 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:09:33.655 10:41:33 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:09:33.655 10:41:33 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:33.655 10:41:33 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:09:33.655 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.655 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.655 10:41:33 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:09:33.655 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:33.655 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.655 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.655 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:33.655 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:09:33.655 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:09:33.655 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.655 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.655 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:33.655 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:09:33.655 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # 
nvme1n1[ncap]=0x17a17a 00:09:33.655 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.655 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.655 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:33.655 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:09:33.655 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:09:33.655 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.655 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.655 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:33.655 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:09:33.655 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:09:33.655 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.655 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.655 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:33.655 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:09:33.655 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:09:33.655 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.655 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.655 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:33.655 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:09:33.655 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:09:33.655 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.655 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.656 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:33.656 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:09:33.656 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:09:33.656 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.656 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.656 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:33.656 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:09:33.656 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:09:33.656 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.656 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.656 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.656 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:09:33.656 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:09:33.656 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.656 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.656 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.656 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:09:33.656 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:09:33.656 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.656 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.656 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.656 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:09:33.656 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:09:33.656 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.656 10:41:33 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:33.656 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.656 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:09:33.656 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:09:33.656 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.656 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.656 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:33.656 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:09:33.656 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:09:33.656 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.656 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.656 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.656 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:09:33.656 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:09:33.656 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.656 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.656 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.656 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:09:33.656 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:09:33.656 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.656 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.656 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.656 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:09:33.656 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:09:33.656 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.656 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.656 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.656 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:09:33.656 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:09:33.656 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.656 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.656 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.656 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:09:33.656 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:09:33.656 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.656 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.656 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.656 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:09:33.656 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:09:33.656 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.656 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.656 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.656 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:09:33.656 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:09:33.656 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.656 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.656 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.656 10:41:33 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'nvme1n1[nvmcap]="0"' 00:09:33.656 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:09:33.656 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.656 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.656 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.656 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:09:33.656 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:09:33.656 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.656 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.656 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.656 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:09:33.656 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:09:33.656 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.656 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.656 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.656 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:09:33.656 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:09:33.656 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.656 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.656 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.656 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:09:33.656 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:09:33.656 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.656 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.656 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.656 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:09:33.656 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:09:33.656 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.656 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.656 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:33.656 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:09:33.656 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:09:33.656 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.656 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.656 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:33.656 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:09:33.656 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:09:33.656 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.656 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.656 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:33.656 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:09:33.656 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:09:33.656 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.656 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.656 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.656 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:09:33.656 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:09:33.656 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:09:33.656 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.656 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.656 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:09:33.656 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:09:33.656 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.656 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.656 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.656 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:09:33.656 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:09:33.656 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.656 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.656 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.656 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:09:33.656 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:09:33.656 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.656 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.656 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.656 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:09:33.656 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:09:33.656 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.656 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.656 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:33.656 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:09:33.656 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:09:33.656 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.656 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.656 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:33.656 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:09:33.656 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:09:33.656 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.656 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.656 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:33.656 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:33.656 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:33.656 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.656 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.656 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:33.656 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:33.656 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:33.657 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.657 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.657 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:33.657 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:33.657 
10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:33.657 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.657 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.657 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:33.657 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:33.657 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:33.657 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.657 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.657 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:09:33.657 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:09:33.657 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:09:33.657 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.657 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.657 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:33.657 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:33.657 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:33.657 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.657 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.657 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:33.657 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:33.657 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:33.657 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.657 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.657 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:09:33.657 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:09:33.657 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:09:33.657 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.657 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.657 10:41:33 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:09:33.657 10:41:33 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:09:33.657 10:41:33 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:09:33.657 10:41:33 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:09:33.657 10:41:33 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:09:33.657 10:41:33 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:33.657 10:41:33 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:09:33.657 10:41:33 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:09:33.657 10:41:33 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:09:33.657 10:41:33 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:33.657 10:41:33 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:09:33.657 10:41:33 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:33.657 10:41:33 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:33.657 10:41:33 nvme_scc -- 
nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:09:33.657 10:41:33 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:09:33.657 10:41:33 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:09:33.657 10:41:33 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:33.657 10:41:33 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:09:33.657 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.657 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.657 10:41:33 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:09:33.657 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:33.657 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.657 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.657 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:33.657 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:09:33.657 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:09:33.657 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.657 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.657 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:33.657 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:09:33.657 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:09:33.657 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.657 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.657 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:09:33.657 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:09:33.657 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:09:33.657 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.657 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.657 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:33.657 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:09:33.657 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:09:33.657 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.657 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.657 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:33.657 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:09:33.657 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:09:33.657 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.657 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.657 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:33.657 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:09:33.657 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:09:33.657 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.657 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.657 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:33.657 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:09:33.657 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:09:33.657 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.657 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.657 10:41:33 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.657 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:09:33.657 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:09:33.657 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.657 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.657 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:33.657 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:09:33.657 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:09:33.657 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.657 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.657 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.657 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:09:33.657 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:09:33.657 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.657 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.657 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:33.657 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:09:33.657 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:09:33.657 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.657 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.657 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.657 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:09:33.657 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:09:33.657 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.657 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.657 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.657 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:09:33.657 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:09:33.657 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.657 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.657 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:33.657 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:09:33.657 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:09:33.657 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.657 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.657 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:33.657 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:09:33.657 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:09:33.657 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.657 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.657 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.657 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:09:33.657 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:09:33.657 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.658 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.658 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:33.658 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:09:33.658 10:41:33 nvme_scc 
-- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:09:33.658 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.658 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.658 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:33.658 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:33.658 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:09:33.658 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.658 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.658 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.658 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:09:33.658 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:09:33.658 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.658 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.658 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.658 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:09:33.658 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:09:33.658 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.658 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.658 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.658 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:09:33.658 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:09:33.658 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.658 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.658 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.658 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:09:33.658 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:09:33.658 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.658 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.658 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.658 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:09:33.658 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:09:33.658 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.658 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.658 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.658 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:09:33.658 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:09:33.658 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.658 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.658 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:33.658 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:09:33.658 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:09:33.658 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.658 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.658 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:33.658 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:09:33.658 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:09:33.658 10:41:33 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:33.658 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.658 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:33.658 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:09:33.658 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:09:33.658 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.658 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.658 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:33.658 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:09:33.658 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:09:33.658 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.658 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.658 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:33.658 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:09:33.658 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:09:33.658 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.658 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.658 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.658 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:09:33.658 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:09:33.658 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.658 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.658 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.658 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:09:33.658 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:09:33.658 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.658 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.658 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.658 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:09:33.658 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:09:33.658 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.658 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.658 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.658 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:09:33.658 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:09:33.658 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.658 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.658 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:33.658 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:09:33.658 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:09:33.658 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.658 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.658 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:33.658 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:09:33.658 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:09:33.658 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.658 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.658 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:09:33.658 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:09:33.658 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:09:33.658 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.658 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.658 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.658 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:09:33.658 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:09:33.658 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.658 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.658 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.658 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:09:33.658 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:09:33.658 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.658 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.658 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.658 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:09:33.658 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:09:33.658 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.658 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.658 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.658 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:09:33.658 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:09:33.658 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.658 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.658 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.658 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:09:33.658 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:09:33.658 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.658 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.658 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.658 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:09:33.658 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:09:33.658 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.658 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.658 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.658 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:09:33.658 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:09:33.658 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.658 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.658 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.658 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:09:33.658 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:09:33.658 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.658 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.658 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.658 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:09:33.658 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:09:33.658 10:41:33 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:33.658 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.658 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.658 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:09:33.658 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:09:33.658 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.658 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.658 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.658 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:09:33.658 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:09:33.658 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.658 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.658 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.658 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:09:33.658 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:09:33.658 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.658 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.658 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.658 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:09:33.659 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:09:33.659 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.659 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.659 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.659 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:09:33.659 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:09:33.659 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.659 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.659 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.659 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:09:33.659 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:09:33.659 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.659 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.659 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.659 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:09:33.659 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:09:33.659 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.659 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.659 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.659 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:09:33.659 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:09:33.659 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.659 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.659 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.659 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:09:33.659 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:09:33.659 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.659 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.659 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:09:33.659 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:09:33.659 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:09:33.659 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.659 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.659 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.659 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:09:33.659 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:09:33.659 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.659 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.659 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.659 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:09:33.659 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:09:33.659 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.659 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.659 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.659 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:09:33.659 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:09:33.659 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.659 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.659 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.659 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:09:33.659 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:09:33.659 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.659 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.659 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.659 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:09:33.659 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:09:33.659 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.659 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.659 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:33.659 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:09:33.659 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:09:33.659 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.659 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.659 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:33.659 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:09:33.659 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:09:33.659 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.659 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.659 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.659 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:09:33.659 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:09:33.659 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.659 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.659 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:33.659 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:09:33.659 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nn]=256 
00:09:33.659 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.659 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.659 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:33.659 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:09:33.659 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:09:33.659 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.659 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.659 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.659 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:09:33.659 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:09:33.659 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.659 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.659 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.659 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:09:33.659 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:09:33.659 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.659 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.659 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:33.659 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:09:33.659 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:09:33.659 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.659 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.659 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.659 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:09:33.659 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:09:33.659 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.659 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.659 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.659 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:09:33.659 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:09:33.659 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.659 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.659 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.659 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:09:33.659 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:09:33.659 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.659 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.659 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.659 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:09:33.659 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:09:33.659 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.659 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.659 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.659 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:09:33.659 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:09:33.659 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.659 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.659 10:41:33 nvme_scc -- nvme/functions.sh@22 -- 
# [[ -n 0x3 ]] 00:09:33.659 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:09:33.659 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:09:33.659 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.659 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.659 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:33.659 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:09:33.659 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:09:33.659 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.659 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.659 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.659 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:09:33.659 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:09:33.659 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.659 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.659 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.659 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:09:33.659 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:09:33.659 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.659 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.659 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.659 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:09:33.659 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:09:33.659 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.659 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.659 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:09:33.659 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:09:33.659 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:09:33.659 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.659 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.659 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.659 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:09:33.659 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:09:33.659 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.659 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.659 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.659 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:09:33.659 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:09:33.659 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.660 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.660 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.660 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:09:33.660 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:09:33.660 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.660 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.660 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.660 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:09:33.660 
10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:09:33.660 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.660 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.660 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.660 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:09:33.660 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:09:33.660 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.660 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.660 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.660 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:09:33.660 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:09:33.660 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.660 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.660 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:33.660 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:33.660 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:33.660 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.660 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.660 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:33.660 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:33.660 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:33.660 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.660 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.660 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:33.660 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:09:33.660 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:09:33.660 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.660 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.660 10:41:33 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:09:33.660 10:41:33 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:33.660 10:41:33 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:09:33.660 10:41:33 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:09:33.660 10:41:33 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:09:33.660 10:41:33 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:09:33.660 10:41:33 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:33.660 10:41:33 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:09:33.660 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.660 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.660 10:41:33 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:09:33.660 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:33.660 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.660 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.660 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 
0x100000 ]] 00:09:33.660 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:09:33.660 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:09:33.660 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.660 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.660 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:33.660 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:09:33.660 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:09:33.660 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.660 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.660 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:33.660 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:09:33.660 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:09:33.660 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.660 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.660 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:33.660 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:09:33.660 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:09:33.660 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.660 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.660 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:33.660 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:09:33.660 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:09:33.660 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.660 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.660 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:33.660 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:09:33.660 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:09:33.660 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.660 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.660 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:33.660 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:09:33.660 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:09:33.660 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.660 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.660 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:33.660 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:09:33.660 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:09:33.660 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.660 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.660 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.660 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:09:33.660 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:09:33.660 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.660 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.660 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.660 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 
00:09:33.660 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:09:33.660 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.660 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.660 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.660 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:09:33.660 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:09:33.660 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.660 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.660 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.660 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:09:33.660 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:09:33.660 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.660 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.660 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:33.660 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:09:33.660 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:09:33.660 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.660 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.660 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.660 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:09:33.660 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:09:33.660 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.660 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.660 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.660 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:09:33.660 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:09:33.660 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.660 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.660 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.660 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:09:33.660 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:09:33.660 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.660 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.660 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.660 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:09:33.660 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:09:33.660 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.660 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.660 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.660 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:09:33.660 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:09:33.660 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.660 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.660 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.660 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:09:33.660 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:09:33.660 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.660 10:41:33 
nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.660 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.660 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:09:33.660 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:09:33.660 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.660 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.660 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.660 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:09:33.660 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:09:33.660 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.660 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.660 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.660 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:09:33.661 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:09:33.661 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.661 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.661 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.661 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:09:33.661 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:09:33.661 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.661 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.661 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.661 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:09:33.661 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:09:33.661 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.661 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.661 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.661 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:09:33.661 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:09:33.661 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.661 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.661 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.661 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:09:33.661 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:09:33.661 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.661 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.661 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:33.661 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:09:33.661 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:09:33.661 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.661 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.661 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:33.661 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:09:33.661 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:09:33.661 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.661 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.661 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:33.661 10:41:33 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:09:33.661 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:09:33.661 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.661 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.661 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.661 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:09:33.661 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:09:33.661 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.661 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.661 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.661 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:09:33.661 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:09:33.661 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.661 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.661 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.661 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:09:33.661 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:09:33.661 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.661 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.661 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.661 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:09:33.661 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:09:33.661 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.661 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.661 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.661 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:09:33.661 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:09:33.661 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.661 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.661 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:33.661 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:09:33.661 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:09:33.661 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.661 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.661 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:33.661 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:09:33.661 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:09:33.661 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.661 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.661 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:33.661 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:33.661 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:33.661 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.661 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.661 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 
ms:8 lbads:9 rp:0 ]] 00:09:33.661 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:33.661 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:33.661 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.661 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.661 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:33.661 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:33.661 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:33.661 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.661 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.661 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:33.661 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:33.661 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:33.661 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.661 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.661 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:33.661 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:33.661 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:33.661 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.661 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.661 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:33.661 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:33.661 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:33.661 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.661 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.661 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:33.661 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:33.661 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:33.661 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.661 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.661 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:33.661 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:33.661 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:33.661 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.661 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.661 10:41:33 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:09:33.661 10:41:33 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:33.661 10:41:33 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:09:33.661 10:41:33 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:09:33.661 10:41:33 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:09:33.661 10:41:33 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:09:33.661 10:41:33 nvme_scc -- 
nvme/functions.sh@18 -- # shift 00:09:33.661 10:41:33 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:09:33.661 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.661 10:41:33 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:09:33.661 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.661 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:33.661 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.661 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.661 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:33.661 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:09:33.661 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:09:33.661 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.661 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.661 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:33.661 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:09:33.661 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:09:33.661 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.661 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.661 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:33.661 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:09:33.661 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:09:33.661 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.662 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.662 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:33.662 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:09:33.662 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:09:33.662 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.662 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.662 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:33.662 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:09:33.662 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:09:33.662 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.662 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.662 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:33.662 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:09:33.662 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:09:33.662 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.662 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.662 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:33.662 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:09:33.662 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:09:33.662 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.662 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.662 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:33.662 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:09:33.662 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:09:33.662 10:41:33 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:09:33.662 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.662 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.662 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:09:33.662 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:09:33.662 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.662 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.662 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.662 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:09:33.662 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:09:33.662 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.662 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.662 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.662 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:09:33.662 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:09:33.662 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.662 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.662 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.662 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:09:33.662 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:09:33.662 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.662 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.662 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:33.662 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:09:33.662 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:09:33.662 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.662 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.662 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.662 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:09:33.662 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:09:33.662 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.662 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.662 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.662 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:09:33.662 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:09:33.662 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.662 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.662 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.662 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:09:33.662 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:09:33.662 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.662 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.662 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.662 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:09:33.662 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:09:33.662 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.662 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.662 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:09:33.662 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:09:33.662 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:09:33.662 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.662 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.662 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.662 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:09:33.662 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:09:33.662 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.662 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.662 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.662 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:09:33.662 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:09:33.662 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.662 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.662 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.662 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:09:33.662 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:09:33.662 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.662 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.662 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.662 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:09:33.662 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:09:33.662 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.662 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.662 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.662 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:09:33.662 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:09:33.662 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.662 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.662 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.662 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:09:33.662 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:09:33.662 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.662 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.662 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.662 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:09:33.662 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:09:33.662 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.662 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.662 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.662 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:09:33.662 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:09:33.662 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.662 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.662 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:33.662 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:09:33.662 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 
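The IFS=: / read -r reg val / eval triplets repeating above are nvme_get in nvme/functions.sh walking nvme-cli output line by line: each line is split on the first colon into a field name and a value, empty values are skipped, and the rest land in a global associative array named after the device. A minimal sketch of that loop (illustrative, not the verbatim functions.sh source; the whitespace trimming is an assumption):

    declare -gA nvme2n2=()                    # the trace uses local -gA inside nvme_get
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}              # field names in the trace carry no whitespace
        [[ -n $val ]] || continue             # matches the [[ -n '' ]] skips in the trace
        nvme2n2[$reg]=${val# }                # e.g. nvme2n2[mssrl]=128, as set above
    done < <(nvme id-ns /dev/nvme2n2)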
00:09:33.662 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.662 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.662 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:33.662 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:09:33.662 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:09:33.662 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.662 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.662 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:33.662 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:09:33.662 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:09:33.662 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.662 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.662 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.662 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:09:33.662 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:09:33.662 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.662 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.662 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.662 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:09:33.662 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:09:33.662 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.662 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.662 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.662 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:09:33.662 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:09:33.662 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.662 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.662 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.662 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:09:33.663 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:09:33.663 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.663 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.663 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.663 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:09:33.663 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:09:33.663 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.663 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.663 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:33.663 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:09:33.663 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:09:33.663 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.663 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.663 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:33.663 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:09:33.663 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 
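mssrl, mcl and msrc, set just above, are the namespace's Simple Copy limits, which is what this nvme_scc run probes: MSSRL caps the LBAs in one source range, MCL caps the total LBAs in one Copy command, and MSRC is a 0's-based count of source ranges. The lbaf0-lbaf7 descriptors that follow encode the supported LBA formats, where lbads is log2 of the block size. A quick decode of the values parsed here (field meanings per the NVMe spec; the echo lines are only illustration):

    msrc=127; mssrl=128; mcl=128                      # values from the id-ns dump above
    echo "source ranges per Copy: $((msrc + 1))"      # MSRC is 0's-based -> 128
    echo "LBAs per range <= $mssrl, LBAs per Copy <= $mcl"
    echo "in-use block size: $((2 ** 12)) bytes"      # lbaf4 below: ms:0 lbads:12 (in use)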
00:09:33.663 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.663 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.663 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:33.663 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:33.663 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:33.663 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.663 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.663 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:33.663 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:33.663 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:33.663 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.663 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.663 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:33.663 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:33.663 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:33.663 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.663 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.663 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:33.663 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:33.663 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:33.663 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.663 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.663 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:33.663 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:33.663 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:33.663 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.663 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.663 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:33.663 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:33.663 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:33.663 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.663 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.663 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:33.663 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:33.663 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:33.663 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.663 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.663 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:33.663 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:33.663 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:33.663 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.663 
10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.663 10:41:33 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:09:33.663 10:41:33 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:33.663 10:41:33 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:09:33.663 10:41:33 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:09:33.663 10:41:33 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:09:33.663 10:41:33 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:09:33.663 10:41:33 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:33.663 10:41:33 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:09:33.663 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.663 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.663 10:41:33 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:09:33.663 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:33.663 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.663 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.663 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:33.663 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:09:33.663 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:09:33.663 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.663 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.663 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:33.663 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:09:33.663 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:09:33.663 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.663 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.663 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:33.663 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:09:33.663 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:09:33.663 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.663 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.663 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:33.663 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:09:33.663 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:09:33.663 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.663 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.663 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:33.663 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:09:33.663 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:09:33.663 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.663 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.663 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:33.663 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:09:33.663 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:09:33.663 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.663 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.663 
10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:33.663 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:09:33.663 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:09:33.663 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.663 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.663 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:33.663 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:09:33.663 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:09:33.663 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.663 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.663 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.663 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:09:33.663 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:09:33.663 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.663 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.663 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.663 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:09:33.663 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:09:33.663 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.663 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.663 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.663 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:09:33.663 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:09:33.663 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.663 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.663 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.663 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:09:33.663 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:09:33.663 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.663 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.663 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:33.663 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:09:33.663 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:09:33.663 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.663 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.663 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.663 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:09:33.663 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:09:33.663 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.663 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.663 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.663 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:09:33.663 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:09:33.663 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.663 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.663 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.663 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:09:33.663 10:41:33 
nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:09:33.663 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.663 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.663 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.663 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:09:33.664 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:09:33.664 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.664 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.664 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.664 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:09:33.664 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:09:33.664 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.664 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.664 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.664 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:09:33.664 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:09:33.664 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.664 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.664 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.664 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:09:33.664 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:09:33.664 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.664 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.664 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.664 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:09:33.664 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:09:33.664 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.664 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.664 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.664 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:09:33.664 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:09:33.664 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.664 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.664 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.664 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:09:33.664 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:09:33.664 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.664 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.664 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.664 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:09:33.664 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:09:33.664 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.664 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.664 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.664 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:09:33.664 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:09:33.664 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.664 10:41:33 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:09:33.664 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.664 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:09:33.664 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:09:33.664 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.664 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.664 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:33.664 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:09:33.664 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:09:33.664 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.664 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.664 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:33.664 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:09:33.664 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:09:33.664 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.664 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.664 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:33.664 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:09:33.664 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:09:33.664 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.664 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.664 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.664 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:09:33.664 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:09:33.664 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.664 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.664 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.664 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:09:33.664 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:09:33.664 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.664 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.664 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.664 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:09:33.664 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:09:33.664 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.664 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.664 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.664 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:09:33.664 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:09:33.664 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.664 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.664 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.664 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:09:33.664 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:09:33.664 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.664 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.664 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:33.664 
10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:09:33.664 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:09:33.664 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.664 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.664 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:33.664 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:09:33.664 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:09:33.664 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.664 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.664 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:33.664 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:33.664 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:33.664 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.664 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.664 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:33.664 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:33.664 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:33.664 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.664 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.664 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:33.664 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:33.664 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:33.664 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.664 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.664 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:33.664 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:33.664 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:33.664 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.664 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.664 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:33.664 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:33.664 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:33.664 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.664 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.664 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:33.664 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:33.664 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:33.664 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.664 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.664 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:33.664 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:33.664 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:33.664 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.664 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.664 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:33.664 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:33.664 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:33.664 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.664 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.664 10:41:33 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:09:33.664 10:41:33 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:09:33.664 10:41:33 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:09:33.664 10:41:33 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:09:33.664 10:41:33 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:09:33.664 10:41:33 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:33.664 10:41:33 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:09:33.664 10:41:33 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:09:33.664 10:41:33 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:09:33.664 10:41:33 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:33.664 10:41:33 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:33.664 10:41:33 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:33.664 10:41:33 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:33.664 10:41:33 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:09:33.664 10:41:33 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:09:33.664 10:41:33 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:09:33.664 10:41:33 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:33.665 10:41:33 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:09:33.665 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.665 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.665 10:41:33 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:09:33.665 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:33.665 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.665 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.665 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:33.665 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:09:33.665 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:09:33.665 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.665 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.665 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:33.665 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:09:33.665 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:09:33.665 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.665 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.665 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:09:33.665 10:41:33 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:09:33.665 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:09:33.665 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.665 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.665 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:33.665 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:09:33.665 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:09:33.665 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.665 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.665 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:33.665 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:09:33.665 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:09:33.665 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.665 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.665 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:33.665 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:09:33.665 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:09:33.665 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.665 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.665 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:33.665 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:09:33.665 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:09:33.665 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.665 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.665 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:09:33.665 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:09:33.665 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:09:33.665 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.665 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.665 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:33.665 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:09:33.665 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:09:33.665 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.665 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.665 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.665 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:09:33.665 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:09:33.665 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.665 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.665 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:33.665 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:09:33.665 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:09:33.665 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.665 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.665 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.665 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:09:33.665 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 
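vid 0x1b36 and ssvid 0x1af4 at the end of the previous block are the Red Hat/QEMU PCI vendor IDs, and mn 'QEMU NVMe Ctrl' confirms nvme3 is an emulated controller, consistent with this test running in a VM. The same header fields can be pulled straight from nvme-cli's default human-readable output, e.g.:

    nvme id-ctrl /dev/nvme3 | grep -E '^(vid|ssvid|sn|mn|fr|mdts)'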
00:09:33.665 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.665 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.665 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.665 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:09:33.665 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:09:33.665 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.665 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.665 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:33.665 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:09:33.665 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:09:33.665 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.665 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.665 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:09:33.665 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:09:33.665 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:09:33.665 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.665 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.665 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.665 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:09:33.665 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:09:33.665 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.665 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.665 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:33.665 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:09:33.665 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:09:33.665 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.665 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.665 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:33.665 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:33.665 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:09:33.665 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.665 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.665 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.665 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:09:33.665 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:09:33.665 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.665 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.665 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.665 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:09:33.665 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:09:33.665 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.665 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.665 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.665 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:09:33.665 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:09:33.665 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
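mdts=7 parsed above bounds the maximum data transfer size at 2^MDTS minimum memory pages. This excerpt does not show CAP.MPSMIN, so assuming the usual 4 KiB minimum page size (an assumption, not read from this log):

    mdts=7
    echo "max transfer: $((2 ** mdts * 4096)) bytes"   # 524288 = 512 KiB, assuming 4 KiB MPSMIN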
00:09:33.665 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.665 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.665 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:09:33.665 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:09:33.665 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.665 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.665 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.665 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:09:33.665 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:09:33.665 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.665 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.665 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.665 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:09:33.665 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:09:33.665 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.665 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.665 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:33.665 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:09:33.665 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:09:33.665 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.665 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.665 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:33.665 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:09:33.665 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:09:33.665 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.665 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.665 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:33.665 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:09:33.665 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:09:33.665 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.665 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.665 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:33.665 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:09:33.665 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:09:33.665 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.665 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.665 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:33.665 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:09:33.665 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:09:33.665 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.665 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.665 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.665 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:09:33.665 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:09:33.665 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.665 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.665 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.665 10:41:33 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'nvme3[npss]="0"' 00:09:33.665 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:09:33.666 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.666 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.666 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.666 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:09:33.666 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:09:33.666 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.666 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.666 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.666 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:09:33.666 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:09:33.666 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.666 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.666 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:33.666 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:09:33.666 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:09:33.666 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.666 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.666 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:33.666 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:09:33.666 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:09:33.666 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.666 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.666 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.666 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:09:33.666 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:09:33.666 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.666 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.666 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.666 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:09:33.666 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:09:33.666 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.666 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.666 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.666 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:09:33.666 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:09:33.666 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.666 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.666 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.666 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:09:33.666 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:09:33.666 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.666 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.666 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.666 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:09:33.666 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:09:33.666 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.666 
10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.666 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.666 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:09:33.666 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:09:33.666 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.666 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.666 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.666 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:09:33.666 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:09:33.666 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.666 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.666 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.666 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:09:33.666 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:09:33.666 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.666 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.666 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.666 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:09:33.666 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:09:33.666 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.666 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.666 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.666 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:09:33.666 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:09:33.666 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.666 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.666 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.666 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:09:33.666 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:09:33.666 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.666 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.666 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.666 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:09:33.666 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:09:33.666 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.666 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.666 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.666 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:09:33.666 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:09:33.666 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.666 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.666 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.666 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:09:33.666 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:09:33.666 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.666 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.666 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.666 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme3[hmminds]="0"' 00:09:33.666 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:09:33.666 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.666 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.666 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.666 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:09:33.666 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:09:33.666 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.666 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.666 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.666 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:09:33.666 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:09:33.666 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.666 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.666 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:33.666 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:09:33.666 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:09:33.666 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.666 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.666 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.666 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:09:33.666 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:09:33.666 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.666 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.666 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.666 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:09:33.666 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:09:33.666 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.666 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.666 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.666 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:09:33.666 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:09:33.666 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.666 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.666 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.666 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:09:33.666 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:09:33.666 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.666 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.666 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.666 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:09:33.666 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:09:33.666 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.666 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.666 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.666 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:09:33.666 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:09:33.666 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
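wctemp=343 and cctemp=373 a few lines up are the warning and critical composite temperature thresholds, reported in kelvins per the spec:

    echo "warning: $((343 - 273)) C, critical: $((373 - 273)) C"   # 70 C / 100 C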
00:09:33.666 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.666 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.667 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:09:33.667 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:09:33.667 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.667 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.667 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:33.667 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:09:33.667 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:09:33.667 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.667 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.667 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:33.667 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:09:33.667 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:09:33.667 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.667 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.667 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.667 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:09:33.667 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:09:33.667 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.667 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.667 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:33.667 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:09:33.667 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:09:33.667 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.667 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.667 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:33.667 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:09:33.667 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:09:33.667 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.667 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.667 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.667 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:09:33.667 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:09:33.667 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.667 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.667 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.667 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:09:33.667 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:09:33.667 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.667 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.667 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:33.667 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:09:33.667 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:09:33.667 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.667 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.667 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.667 10:41:33 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:09:33.667 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:09:33.667 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.667 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.667 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.667 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:09:33.667 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:09:33.667 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.667 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.667 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.667 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:09:33.667 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:09:33.667 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.667 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.667 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.667 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:09:33.667 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:09:33.667 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.667 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.667 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.667 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:09:33.667 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:09:33.667 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.667 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.667 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:33.667 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:09:33.667 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:09:33.667 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.667 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.667 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:33.667 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:09:33.667 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:09:33.667 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.667 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.667 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.667 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:09:33.667 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:09:33.667 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.667 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.667 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.667 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:09:33.667 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:09:33.667 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.667 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.667 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.667 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:09:33.667 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:09:33.667 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
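sqes=0x66 and cqes=0x44 above each pack two log2 sizes into one byte: the low nibble is the required queue entry size and the high nibble the maximum. Decoding the values parsed here:

    sqes=0x66; cqes=0x44
    echo "SQE: $((2 ** (sqes & 0xf)))-$((2 ** ((sqes >> 4) & 0xf))) bytes"   # 64-64
    echo "CQE: $((2 ** (cqes & 0xf)))-$((2 ** ((cqes >> 4) & 0xf))) bytes"   # 16-16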
00:09:33.667 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.667 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:09:33.667 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:09:33.667 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:09:33.667 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.667 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.667 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.667 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:09:33.667 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:09:33.667 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.667 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.667 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.667 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:09:33.667 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:09:33.667 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.667 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.667 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.667 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:09:33.667 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:09:33.667 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.667 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.667 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.667 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:09:33.667 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:09:33.667 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.667 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.667 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.667 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:09:33.667 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:09:33.667 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.667 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.667 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.667 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:09:33.667 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:09:33.667 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.667 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.667 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:33.667 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:33.667 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:33.667 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.667 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.667 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:33.667 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:33.667 10:41:33 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:33.667 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.667 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.667 10:41:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:33.667 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:09:33.667 10:41:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:09:33.667 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:33.667 10:41:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.667 10:41:33 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:09:33.667 10:41:33 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:09:33.667 10:41:33 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:09:33.667 10:41:33 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:09:33.667 10:41:33 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:09:33.667 10:41:33 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:09:33.667 10:41:33 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:09:33.667 10:41:33 nvme_scc -- nvme/functions.sh@204 -- # local _ctrls feature=scc 00:09:33.667 10:41:33 nvme_scc -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:09:33.667 10:41:33 nvme_scc -- nvme/functions.sh@206 -- # get_ctrls_with_feature scc 00:09:33.667 10:41:33 nvme_scc -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:09:33.667 10:41:33 nvme_scc -- nvme/functions.sh@194 -- # local ctrl feature=scc 00:09:33.667 10:41:33 nvme_scc -- nvme/functions.sh@196 -- # type -t ctrl_has_scc 00:09:33.667 10:41:33 nvme_scc -- nvme/functions.sh@196 -- # [[ function == function ]] 00:09:33.668 10:41:33 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:33.668 10:41:33 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme1 00:09:33.668 10:41:33 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme1 oncs 00:09:33.668 10:41:33 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme1 00:09:33.668 10:41:33 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme1 00:09:33.668 10:41:33 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme1 oncs 00:09:33.668 10:41:33 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:09:33.668 10:41:33 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:09:33.668 10:41:33 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:09:33.668 10:41:33 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:33.668 10:41:33 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:33.668 10:41:33 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:33.668 10:41:33 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:33.668 10:41:33 nvme_scc -- nvme/functions.sh@199 -- # echo nvme1 00:09:33.668 10:41:33 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:33.668 10:41:33 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme0 00:09:33.668 10:41:33 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme0 oncs 00:09:33.668 10:41:33 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme0 00:09:33.668 10:41:33 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme0 00:09:33.668 10:41:33 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme0 oncs 00:09:33.668 10:41:33 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:09:33.668 
10:41:33 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:09:33.668 10:41:33 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:09:33.668 10:41:33 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:33.668 10:41:33 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:33.668 10:41:33 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:33.668 10:41:33 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:33.668 10:41:33 nvme_scc -- nvme/functions.sh@199 -- # echo nvme0 00:09:33.668 10:41:33 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:33.668 10:41:33 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme3 00:09:33.668 10:41:33 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme3 oncs 00:09:33.668 10:41:33 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme3 00:09:33.668 10:41:33 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme3 00:09:33.668 10:41:33 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme3 oncs 00:09:33.668 10:41:33 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs 00:09:33.668 10:41:33 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:09:33.668 10:41:33 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:09:33.668 10:41:33 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:33.668 10:41:33 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:33.668 10:41:33 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:33.668 10:41:33 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:33.668 10:41:33 nvme_scc -- nvme/functions.sh@199 -- # echo nvme3 00:09:33.668 10:41:33 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:33.668 10:41:33 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme2 00:09:33.668 10:41:33 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme2 oncs 00:09:33.668 10:41:33 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme2 00:09:33.668 10:41:33 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme2 00:09:33.668 10:41:33 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme2 oncs 00:09:33.668 10:41:33 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs 00:09:33.668 10:41:33 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:09:33.668 10:41:33 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:09:33.668 10:41:33 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:33.668 10:41:33 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:33.668 10:41:33 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:33.668 10:41:33 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:33.668 10:41:33 nvme_scc -- nvme/functions.sh@199 -- # echo nvme2 00:09:33.668 10:41:33 nvme_scc -- nvme/functions.sh@207 -- # (( 4 > 0 )) 00:09:33.668 10:41:33 nvme_scc -- nvme/functions.sh@208 -- # echo nvme1 00:09:33.668 10:41:33 nvme_scc -- nvme/functions.sh@209 -- # return 0 00:09:33.668 10:41:33 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1 00:09:33.668 10:41:33 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0 00:09:33.668 10:41:33 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:34.285 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:34.554 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:34.554 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:09:34.554 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:34.554 0000:00:12.0 (1b36 
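The span above shows the core mechanic of scan_nvme_ctrls: nvme/functions.sh pipes nvme-cli's id-ctrl listing through a while IFS=: read -r reg val loop and evals each pair into a per-controller associative array, and ctrl_has_scc then tests ONCS bit 8. A minimal bash sketch of that pattern (simplified from what the trace shows; the exact field cleanup is an assumption, not the verbatim SPDK source):

  # Parse "field : value" lines from nvme id-ctrl into an associative array.
  declare -A ctrl
  while IFS=: read -r reg val; do
      reg=${reg//[[:space:]]/}     # "oncs      " -> "oncs"
      val=${val# }                 # drop the leading space after the colon
      [[ -n $reg && -n $val ]] || continue
      ctrl[$reg]=$val              # e.g. ctrl[oncs]=0x15d, ctrl[sqes]=0x66
  done < <(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1)

  # ctrl_has_scc reduces to ONCS bit 8, the (Simple) Copy command bit:
  (( ctrl[oncs] & 1 << 8 )) && echo "nvme1 supports Simple Copy"   # 0x15d has bit 8 set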
00:09:34.813 10:41:34 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0'
00:09:34.813 10:41:34 nvme_scc -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']'
00:09:34.813 10:41:34 nvme_scc -- common/autotest_common.sh@1107 -- # xtrace_disable
00:09:34.813 10:41:34 nvme_scc -- common/autotest_common.sh@10 -- # set +x
00:09:34.813 ************************************
00:09:34.813 START TEST nvme_simple_copy
00:09:34.813 ************************************
00:09:34.813 10:41:34 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0'
00:09:35.071 Initializing NVMe Controllers
00:09:35.071 Attaching to 0000:00:10.0
00:09:35.071 Controller supports SCC. Attached to 0000:00:10.0
00:09:35.071 Namespace ID: 1 size: 6GB
00:09:35.071 Initialization complete.
00:09:35.071 Controller QEMU NVMe Ctrl (12340 )
00:09:35.071 Controller PCI vendor:6966 PCI subsystem vendor:6900
00:09:35.071 Namespace Block Size:4096
00:09:35.071 Writing LBAs 0 to 63 with Random Data
00:09:35.071 Copied LBAs from 0 - 63 to the Destination LBA 256
00:09:35.071 LBAs matching Written Data: 64
00:09:35.071 real 0m0.253s
00:09:35.071 user 0m0.094s
00:09:35.071 sys 0m0.057s
00:09:35.071 10:41:34 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1126 -- # xtrace_disable
00:09:35.071 10:41:34 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x
00:09:35.071 ************************************
00:09:35.071 END TEST nvme_simple_copy
00:09:35.071 ************************************
00:09:35.071 real 0m7.602s
00:09:35.071 user 0m1.033s
00:09:35.071 sys 0m1.390s
00:09:35.071 10:41:34 nvme_scc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:09:35.071 10:41:34 nvme_scc -- common/autotest_common.sh@10 -- # set +x
00:09:35.071 ************************************
00:09:35.071 END TEST nvme_scc
00:09:35.071 ************************************
00:09:35.072 10:41:34 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]]
00:09:35.072 10:41:34 -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]]
00:09:35.072 10:41:34 -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]]
00:09:35.072 10:41:34 -- spdk/autotest.sh@228 -- # [[ 1 -eq 1 ]]
00:09:35.072 10:41:34 -- spdk/autotest.sh@229 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh
00:09:35.072 10:41:34 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:09:35.072 10:41:34 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:09:35.072 10:41:34 -- common/autotest_common.sh@10 -- # set +x
00:09:35.072 ************************************
00:09:35.072 START TEST nvme_fdp
00:09:35.072 ************************************
00:09:35.072 10:41:34 nvme_fdp -- common/autotest_common.sh@1125 -- # test/nvme/nvme_fdp.sh
00:09:35.072 * Looking for test storage...
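The START/END banners and the per-test real/user/sys lines above come from the autotest run_test wrapper, which times whatever test command it is handed (here the simple_copy binary, pointed at the chosen controller via -r 'trtype:pcie traddr:0000:00:10.0'). A rough sketch of that wrapper's shape (banner and timing handling simplified; the real common/autotest_common.sh also juggles xtrace state, as the xtrace_disable/set +x records show):

  run_test() {
      local test_name=$1; shift
      echo "************************************"
      echo "START TEST $test_name"
      echo "************************************"
      time "$@"                    # the timed run emits the real/user/sys lines
      local rc=$?
      echo "************************************"
      echo "END TEST $test_name"
      echo "************************************"
      return $rc
  }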
00:09:35.072 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme
00:09:35.072 10:41:34 nvme_fdp -- common/autotest_common.sh@1680 -- # [[ y == y ]]
00:09:35.072 10:41:34 nvme_fdp -- common/autotest_common.sh@1681 -- # lcov --version
00:09:35.072 10:41:34 nvme_fdp -- common/autotest_common.sh@1681 -- # awk '{print $NF}'
00:09:35.072 10:41:35 nvme_fdp -- common/autotest_common.sh@1681 -- # lt 1.15 2
00:09:35.072 10:41:35 nvme_fdp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 (reads both versions into arrays with IFS=.-:, walks the fields via decimal, and returns 0 at the first field because 1 < 2; per-field trace condensed)
00:09:35.072 10:41:35 nvme_fdp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:09:35.072 10:41:35 nvme_fdp -- common/autotest_common.sh@1694-1695 -- # export LCOV_OPTS and LCOV=lcov with --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 (four near-identical multi-line assignments condensed)
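The lt call above decides whether the installed lcov (1.15) predates version 2: cmp_versions splits both strings on IFS=.-: and compares the numeric fields left to right. A condensed sketch of that comparison (logic reconstructed from the trace, not the verbatim scripts/common.sh):

  # Return 0 if version $1 sorts strictly before version $2.
  lt() {
      local -a ver1 ver2
      IFS=.-: read -ra ver1 <<< "$1"
      IFS=.-: read -ra ver2 <<< "$2"
      local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
      for (( v = 0; v < len; v++ )); do
          (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
          (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
      done
      return 1                     # equal versions are not "less than"
  }

  lt 1.15 2 && echo "lcov is older than 2"   # true: 1 < 2 in the first field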
00:09:35.072 10:41:35 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh
00:09:35.072 10:41:35 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk (via dirname and readlink -f of functions.sh)
00:09:35.072 10:41:35 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:09:35.072 10:41:35 nvme_fdp -- scripts/common.sh@15 -- # shopt -s extglob
00:09:35.072 10:41:35 nvme_fdp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:09:35.072 10:41:35 nvme_fdp -- scripts/common.sh@552-553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:09:35.072 10:41:35 nvme_fdp -- paths/export.sh@2-6 -- # export PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin (the same toolchain directories were re-prepended once per sourcing pass; the repeated expansions are condensed to the deduplicated order)
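The nvme_fdp run below re-sources nvme/functions.sh, whose registry the earlier nvme_scc trace already exercised: one bash associative array per controller holds its id-ctrl fields, and global maps tie each controller name to its namespace array and PCI address. A sketch of how the scan files each device (illustrative values for nvme0; the get_oncs helper shape is an assumption based on the nameref pattern in the trace):

  declare -A ctrls nvmes bdfs      # as declared by nvme/functions.sh@10-12
  declare -a ordered_ctrls

  # scan_nvme_ctrls ends up filing each controller three ways, e.g.:
  ctrls[nvme0]=nvme0               # controller -> name of its id-ctrl array
  nvmes[nvme0]=nvme0_ns            # controller -> name of its namespace map
  bdfs[nvme0]=0000:00:11.0         # controller -> PCI address
  ordered_ctrls[0]=nvme0

  # Any register is then one nameref away:
  get_oncs() { local -n _ctrl=$1; echo "${_ctrl[oncs]}"; }
  get_oncs nvme0                   # -> 0x15d once the scan has populated nvme0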
00:09:35.072 10:41:35 nvme_fdp -- nvme/functions.sh@10-14 -- # declare -A ctrls nvmes bdfs; declare -a ordered_ctrls; nvme_name=
00:09:35.072 10:41:35 nvme_fdp -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:09:35.072 10:41:35 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:09:35.337 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:09:35.598 Waiting for block devices as requested
00:09:35.598 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
00:09:35.598 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:09:35.856 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme
00:09:35.856 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme
00:09:41.132 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing
00:09:41.132 10:41:40 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls
00:09:41.132 10:41:40 nvme_fdp -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci
00:09:41.132 10:41:40 nvme_fdp -- nvme/functions.sh@47-50 -- # for /sys/class/nvme/nvme0: pci=0000:00:11.0, pci_can_use 0000:00:11.0 returns 0
00:09:41.132 10:41:40 nvme_fdp -- nvme/functions.sh@51-52 -- # ctrl_dev=nvme0; nvme_get nvme0 id-ctrl /dev/nvme0 (declares local -gA nvme0=() and parses /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0)
00:09:41.132 10:41:40 nvme_fdp -- nvme/functions.sh@21-23 -- # nvme0 id-ctrl (eval/assignment pairs condensed): vid=0x1b36 ssvid=0x1af4 sn='12341 ' mn='QEMU NVMe Ctrl ' fr='8.0.0 ' rab=6 ieee=525400 cmic=0 mdts=7 cntlid=0 ver=0x10400 rtd3r=0 rtd3e=0 oaes=0x100 ctratt=0x8000 rrls=0 cntrltype=1 fguid=00000000-0000-0000-0000-000000000000 crdt1=0 crdt2=0 crdt3=0 nvmsr=0 vwci=0 mec=0 oacs=0x12a acl=3 aerl=3 frmw=0x3 lpa=0x7 elpe=0 npss=0 avscc=0 apsta=0 wctemp=343 cctemp=373 mtfa=0 hmpre=0 hmmin=0 tnvmcap=0 unvmcap=0 rpmbs=0 edstt=0 dsto=0 fwug=0 kas=0 hctma=0 mntmt=0 mxtmt=0 sanicap=0 hmminds=0 hmmaxd=0 nsetidmax=0 endgidmax=0 anatt=0 anacap=0 anagrpmax=0 nanagrpid=0 pels=0 domainid=0 megcap=0 sqes=0x66 cqes=0x44 maxcmd=0 nn=256 oncs=0x15d fuses=0 fna=0 vwc=0x7 awun=0 awupf=0 icsvscc=0 nwpc=0 acwu=0 ocfs=0x3 sgls=0x1 mnan=0 maxdna=0 maxcna=0 subnqn=nqn.2019-08.org.qemu:12341 ioccsz=0 iorcsz=0 icdoff=0 fcatt=0 msdbd=0 ofcs=0
00:09:41.135 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0'
00:09:41.135 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-'
00:09:41.135 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=-
00:09:41.135 10:41:40 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns
00:09:41.135 10:41:40 nvme_fdp -- nvme/functions.sh@54-56 -- # for ns in /sys/class/nvme/nvme0/nvme0n1: ns_dev=nvme0n1
00:09:41.135 10:41:40 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 (declares local -gA nvme0n1=() and parses /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1)
00:09:41.135 10:41:40 nvme_fdp -- nvme/functions.sh@21-23 -- # nvme0n1 id-ns (eval/assignment pairs condensed): nsze=0x140000 ncap=0x140000 nuse=0x140000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0
00:09:41.136 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:41.136 10:41:40 nvme_fdp -- nvme/functions.sh@23
-- # eval 'nvme0n1[npwa]="0"' 00:09:41.136 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:09:41.136 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.136 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.136 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.136 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:09:41.136 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:09:41.136 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.136 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.136 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.136 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:09:41.136 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:09:41.136 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.136 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.136 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.136 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:09:41.136 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:09:41.136 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.136 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.136 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:41.136 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:09:41.136 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:09:41.136 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.136 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.136 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:41.136 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:09:41.136 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:09:41.136 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.136 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.136 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:41.136 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:09:41.136 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:09:41.136 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.136 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.136 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.136 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:09:41.136 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:09:41.136 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.136 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.136 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.136 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:09:41.136 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:09:41.136 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.136 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.136 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.136 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:09:41.136 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:09:41.136 10:41:40 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:41.136 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.136 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.136 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:09:41.136 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:09:41.136 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.136 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.136 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.136 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:09:41.136 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:09:41.136 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.136 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.136 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:41.136 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:09:41.136 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:09:41.136 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.136 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.136 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:41.136 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:09:41.136 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:09:41.136 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.136 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.136 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:41.136 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:41.136 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:41.136 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.136 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.136 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:41.136 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:41.136 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:41.136 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.136 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.136 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:41.136 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:41.136 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:41.136 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.136 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.136 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:41.136 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:41.136 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:41.136 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.136 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.136 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 
lbads:12 rp:0 (in use) ]] 00:09:41.136 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:41.136 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:41.136 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.136 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.136 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:41.136 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:41.136 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:41.136 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.136 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.136 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:41.136 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:41.136 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:41.136 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.136 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.136 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:41.136 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:41.136 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:41.136 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.136 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.136 10:41:40 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:09:41.136 10:41:40 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:09:41.136 10:41:40 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:09:41.136 10:41:40 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:09:41.136 10:41:40 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:09:41.136 10:41:40 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:41.136 10:41:40 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:09:41.136 10:41:40 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:09:41.136 10:41:40 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:09:41.136 10:41:40 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:41.136 10:41:40 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:09:41.137 10:41:40 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:41.137 10:41:40 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:41.137 10:41:40 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:09:41.137 10:41:40 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:09:41.137 10:41:40 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:09:41.137 10:41:40 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:41.137 10:41:40 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:09:41.137 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.137 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.137 10:41:40 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:09:41.137 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:41.137 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:09:41.137 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.137 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:41.137 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:09:41.137 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:09:41.137 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.137 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.137 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:41.137 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:09:41.137 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:09:41.137 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.137 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.137 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:09:41.137 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:09:41.137 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:09:41.137 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.137 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.137 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:41.137 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:09:41.137 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:09:41.137 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.137 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.137 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:41.137 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:09:41.137 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:09:41.137 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.137 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.137 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:41.137 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:09:41.137 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:09:41.137 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.137 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.137 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:41.137 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:09:41.137 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:09:41.137 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.137 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.137 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.137 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:09:41.137 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:09:41.137 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.137 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.137 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:41.137 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:09:41.137 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:09:41.137 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.137 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.137 10:41:40 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.137 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:09:41.137 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:09:41.137 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.137 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.137 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:41.137 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:09:41.137 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:09:41.137 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.137 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.137 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.137 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:09:41.137 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:09:41.137 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.137 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.137 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.137 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:09:41.137 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:09:41.137 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.137 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.137 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:41.137 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:09:41.137 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:09:41.137 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.137 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.137 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:41.137 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:09:41.137 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:09:41.137 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.137 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.137 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.137 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:09:41.137 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:09:41.137 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.137 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.137 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:41.137 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:09:41.137 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:09:41.137 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.137 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.137 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:41.137 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:41.137 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:09:41.137 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.137 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.137 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.137 
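By this point the trace has finished nvme0 (its namespace recorded through the _ctrl_ns nameref and the controller registered in ctrls, nvmes, bdfs and ordered_ctrls) and the discovery loop has moved on to nvme1 at PCI 0000:00:10.0 after the pci_can_use gate in scripts/common.sh returned 0. A rough sketch of that loop shape, under stated assumptions (the PCI_ALLOWED/PCI_BLOCKED list names are an assumption; the log only shows the empty-list checks passing):

# Rough sketch of the controller-discovery loop the trace is executing.
declare -A ctrls nvmes bdfs
declare -a ordered_ctrls

for ctrl in /sys/class/nvme/nvme*; do
  [[ -e $ctrl ]] || continue
  pci=$(basename "$(readlink -f "$ctrl/device")")       # e.g. 0000:00:10.0
  # pci_can_use: reject blocked devices, honor a non-empty allow list
  [[ ${PCI_BLOCKED:-} == *"$pci"* ]] && continue
  [[ -z ${PCI_ALLOWED:-} || ${PCI_ALLOWED:-} == *"$pci"* ]] || continue
  ctrl_dev=${ctrl##*/}                                  # nvme0, nvme1, ...
  # nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev" runs here, followed by
  # one id-ns pass per /sys/class/nvme/$ctrl_dev/${ctrl_dev}n* namespace
  ctrls[$ctrl_dev]=$ctrl_dev
  nvmes[$ctrl_dev]=${ctrl_dev}_ns
  bdfs[$ctrl_dev]=$pci
  ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev            # indexed by controller number
done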
10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:09:41.137 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:09:41.137 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.137 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.137 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.137 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:09:41.137 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:09:41.137 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.137 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.137 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.137 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:09:41.137 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:09:41.137 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.137 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.137 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.137 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:09:41.137 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:09:41.137 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.137 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.137 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.137 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:09:41.137 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:09:41.137 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.137 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.137 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.137 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:09:41.137 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:09:41.137 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.137 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.137 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:41.137 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:09:41.137 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:09:41.137 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.137 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.137 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:41.137 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:09:41.137 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:09:41.137 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.137 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.137 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:41.137 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:09:41.137 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:09:41.137 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.137 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.137 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:41.137 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:09:41.137 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:09:41.137 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- 
# IFS=: 00:09:41.137 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.137 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:41.137 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:09:41.137 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:09:41.137 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.138 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.138 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.138 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:09:41.138 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:09:41.138 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.138 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.138 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.138 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:09:41.138 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:09:41.138 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.138 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.138 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.138 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:09:41.138 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:09:41.138 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.138 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.138 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.138 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:09:41.138 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:09:41.138 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.138 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.138 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:41.138 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:09:41.138 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:09:41.138 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.138 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.138 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:41.138 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:09:41.138 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:09:41.138 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.138 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.138 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.138 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:09:41.138 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:09:41.138 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.138 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.138 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.138 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:09:41.138 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:09:41.138 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.138 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.138 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.138 10:41:40 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:09:41.138 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:09:41.138 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.138 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.138 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.138 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:09:41.138 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:09:41.138 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.138 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.138 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.138 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:09:41.138 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:09:41.138 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.138 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.138 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.138 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:09:41.138 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:09:41.138 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.138 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.138 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.138 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:09:41.138 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:09:41.138 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.138 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.138 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.138 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:09:41.138 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:09:41.138 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.138 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.138 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.138 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:09:41.138 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:09:41.138 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.138 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.138 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.138 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:09:41.138 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:09:41.138 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.138 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.138 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.138 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:09:41.138 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:09:41.138 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.138 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.138 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.138 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:09:41.138 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:09:41.138 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.138 
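Once filled in, these arrays are ordinary bash state: the "local -n _ctrl_ns=nvme1_ns" nameref seen in the trace is how the namespace map is handed around, and hex fields such as oncs=0x15d or sqes=0x66 can be decoded with shell arithmetic. For example, a small hedged helper for the Copy-command capability (ONCS bit 8 per the NVMe spec; oncs_has_copy is illustrative, not part of functions.sh):

oncs_has_copy() {
  local -n _ctrl=$1             # nameref, same mechanism as _ctrl_ns above
  (( _ctrl[oncs] & (1 << 8) ))  # ONCS bit 8: Copy command supported
}

if oncs_has_copy nvme1; then
  echo "nvme1 advertises Copy (oncs=${nvme1[oncs]})"    # 0x15d has bit 8 set
fi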
10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.138 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.138 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:09:41.138 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:09:41.138 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.138 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.138 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.138 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:09:41.138 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:09:41.138 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.138 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.138 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.138 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:09:41.138 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:09:41.138 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.138 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.138 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.138 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:09:41.138 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:09:41.138 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.138 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.138 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.138 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:09:41.138 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:09:41.138 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.138 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.138 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.138 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:09:41.138 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:09:41.138 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.138 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.138 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.138 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:09:41.138 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:09:41.138 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.138 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.138 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.138 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:09:41.138 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:09:41.138 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.138 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.138 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.138 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:09:41.138 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:09:41.138 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.138 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.138 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.138 10:41:40 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:09:41.138 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:09:41.138 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.138 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.138 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.138 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:09:41.138 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:09:41.138 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.138 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.138 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.138 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:09:41.138 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:09:41.138 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.138 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.138 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.138 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:09:41.138 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:09:41.138 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.138 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.138 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:41.138 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:09:41.138 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:09:41.138 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.138 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.138 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:41.138 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:09:41.138 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:09:41.139 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.139 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.139 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.139 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:09:41.139 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:09:41.139 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.139 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.139 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:41.139 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:09:41.139 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:09:41.139 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.139 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.139 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:41.139 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:09:41.139 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:09:41.139 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.139 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.139 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.139 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:09:41.139 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:09:41.139 10:41:40 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:41.139 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.139 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.139 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:09:41.139 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:09:41.139 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.139 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.139 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:41.139 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:09:41.139 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:09:41.139 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.139 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.139 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.139 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:09:41.139 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:09:41.139 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.139 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.139 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.139 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:09:41.139 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:09:41.139 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.139 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.139 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.139 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:09:41.139 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:09:41.139 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.139 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.139 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.139 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:09:41.139 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:09:41.139 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.139 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.139 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.139 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:09:41.139 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:09:41.139 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.139 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.139 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:41.139 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:09:41.139 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:09:41.139 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.139 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.139 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:41.139 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:09:41.139 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:09:41.139 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.139 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.139 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.139 10:41:40 
nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:09:41.139 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:09:41.139 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.139 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.139 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.139 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:09:41.139 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:09:41.139 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.139 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.139 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.139 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:09:41.139 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:09:41.139 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.139 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.139 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:09:41.139 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:09:41.139 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:09:41.139 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.139 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.139 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.139 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:09:41.139 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:09:41.139 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.139 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.139 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.139 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:09:41.139 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:09:41.139 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.139 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.139 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.139 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:09:41.139 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:09:41.139 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.139 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.139 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.139 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:09:41.139 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:09:41.139 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.139 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.139 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.139 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:09:41.139 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:09:41.139 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.139 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.139 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.139 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:09:41.139 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme1[ofcs]=0 00:09:41.139 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.139 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.139 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:41.139 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:41.139 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:41.139 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.139 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.139 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:41.139 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:41.139 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:41.139 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.139 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.139 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:41.139 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:09:41.139 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:09:41.139 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.139 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.139 10:41:40 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:09:41.139 10:41:40 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:41.139 10:41:40 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:09:41.139 10:41:40 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:09:41.139 10:41:40 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:09:41.139 10:41:40 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:09:41.139 10:41:40 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:41.139 10:41:40 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:09:41.139 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.139 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.139 10:41:40 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:09:41.139 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:41.139 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.139 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.139 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:41.139 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:09:41.139 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:09:41.139 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.139 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.139 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:41.139 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:09:41.139 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:09:41.139 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.139 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.139 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 
0x17a17a ]] 00:09:41.139 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:09:41.139 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:09:41.140 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.140 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.140 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:41.140 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:09:41.140 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:09:41.140 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.140 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.140 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:41.140 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:09:41.140 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:09:41.140 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.140 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.140 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:41.140 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:09:41.140 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:09:41.140 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.140 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.140 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:41.140 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:09:41.140 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:09:41.140 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.140 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.140 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:41.140 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:09:41.140 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:09:41.140 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.140 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.140 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.140 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:09:41.140 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:09:41.140 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.140 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.140 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.140 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:09:41.140 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:09:41.140 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.140 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.140 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.140 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:09:41.140 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:09:41.140 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.140 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.140 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.140 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:09:41.140 10:41:40 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:09:41.140 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.140 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.140 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:41.140 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:09:41.140 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:09:41.140 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.140 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.140 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.140 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:09:41.140 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:09:41.140 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.140 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.140 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.140 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:09:41.140 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:09:41.140 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.140 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.140 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.140 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:09:41.140 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:09:41.140 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.140 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.140 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.140 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:09:41.140 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:09:41.140 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.140 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.140 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.140 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:09:41.140 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:09:41.140 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.140 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.140 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.140 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:09:41.140 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:09:41.140 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.140 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.140 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.140 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:09:41.140 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:09:41.140 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.140 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.140 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.140 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:09:41.140 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:09:41.140 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.140 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- 
# read -r reg val 00:09:41.140 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.140 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:09:41.140 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:09:41.140 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.140 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.140 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.140 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:09:41.140 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:09:41.140 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.140 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.140 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.140 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:09:41.140 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:09:41.140 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.140 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.140 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.140 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:09:41.140 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:09:41.140 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.140 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.140 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.140 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:09:41.140 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:09:41.140 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.140 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.140 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:41.140 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:09:41.140 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:09:41.140 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.140 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.140 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:41.140 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:09:41.140 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:09:41.140 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.140 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.140 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:41.140 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:09:41.140 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:09:41.140 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.140 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.140 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.140 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:09:41.140 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:09:41.140 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.140 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.140 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.140 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme1n1[anagrpid]="0"' 00:09:41.140 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:09:41.140 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.140 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.141 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.141 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:09:41.141 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:09:41.141 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.141 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.141 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.141 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:09:41.141 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:09:41.141 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.141 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.141 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.141 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:09:41.141 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:09:41.141 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.141 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.141 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:41.141 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:09:41.141 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:09:41.141 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.141 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.141 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:41.141 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:09:41.141 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:09:41.141 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.141 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.141 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:41.141 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:41.141 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:41.141 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.141 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.141 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:41.141 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:41.141 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:41.141 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.141 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.141 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:41.141 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:41.141 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:41.141 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.141 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r 
reg val 00:09:41.141 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:41.141 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:41.141 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:41.141 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.141 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.141 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:09:41.141 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:09:41.141 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:09:41.141 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.141 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.141 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:41.141 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:41.141 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:41.141 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.141 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.141 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:41.141 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:41.141 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:41.141 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.141 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.141 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:09:41.141 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:09:41.141 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:09:41.141 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.141 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.141 10:41:40 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:09:41.141 10:41:40 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:09:41.141 10:41:40 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:09:41.141 10:41:40 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:09:41.141 10:41:40 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:09:41.141 10:41:40 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:41.141 10:41:40 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:09:41.141 10:41:40 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:09:41.141 10:41:40 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:09:41.141 10:41:40 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:41.141 10:41:40 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:09:41.141 10:41:40 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:41.141 10:41:40 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:41.141 10:41:40 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:09:41.141 10:41:40 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:09:41.141 10:41:40 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:09:41.141 
10:41:40 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:41.141 10:41:40 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:09:41.141 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.141 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.141 10:41:40 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:09:41.141 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:41.141 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.141 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.141 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:41.141 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:09:41.141 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:09:41.141 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.141 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.141 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:41.141 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:09:41.141 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:09:41.141 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.141 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.141 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:09:41.141 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:09:41.141 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:09:41.141 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.141 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.141 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:41.142 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:09:41.142 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:09:41.142 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.142 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.142 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:41.142 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:09:41.142 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:09:41.142 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.142 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.142 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:41.142 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:09:41.142 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:09:41.142 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.142 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.142 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:41.142 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:09:41.142 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:09:41.142 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.142 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.142 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.142 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:09:41.142 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:09:41.142 10:41:40 nvme_fdp 
-- nvme/functions.sh@21 -- # IFS=: 00:09:41.142 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.142 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:41.142 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:09:41.142 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:09:41.142 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.142 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.142 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.142 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:09:41.142 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:09:41.142 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.142 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.142 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:41.142 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:09:41.142 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:09:41.142 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.142 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.142 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.142 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:09:41.142 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:09:41.142 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.142 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.142 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.142 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:09:41.142 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:09:41.142 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.142 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.142 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:41.142 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:09:41.142 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:09:41.142 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.142 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.142 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:41.142 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:09:41.142 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:09:41.142 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.142 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.142 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.142 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:09:41.142 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:09:41.142 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.142 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.142 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:41.142 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:09:41.142 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:09:41.142 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.142 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.142 10:41:40 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:41.142 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:41.142 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:09:41.142 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.142 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.142 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.142 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:09:41.142 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:09:41.142 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.142 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.142 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.142 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:09:41.142 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:09:41.142 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.142 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.142 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.142 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:09:41.142 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:09:41.142 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.142 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.142 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.142 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:09:41.142 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:09:41.142 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.142 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.142 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.142 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:09:41.142 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:09:41.142 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.142 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.142 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.142 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:09:41.142 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:09:41.142 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.142 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.142 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:41.142 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:09:41.142 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:09:41.142 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.142 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.142 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:41.142 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:09:41.142 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:09:41.142 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.142 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.142 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:41.142 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2[aerl]="3"' 00:09:41.142 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:09:41.142 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.142 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.142 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:41.142 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:09:41.142 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:09:41.142 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.142 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.142 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:41.142 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:09:41.142 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:09:41.142 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.142 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.142 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.142 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:09:41.142 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:09:41.142 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.142 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.142 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.142 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:09:41.142 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:09:41.142 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.142 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.142 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.142 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:09:41.142 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:09:41.142 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.142 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.142 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.142 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:09:41.142 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:09:41.142 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.142 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.142 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:41.142 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:09:41.142 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:09:41.142 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.142 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.142 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:41.142 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:09:41.142 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:09:41.142 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.142 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.142 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.142 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:09:41.143 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:09:41.143 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.143 10:41:40 nvme_fdp 
-- nvme/functions.sh@21 -- # read -r reg val 00:09:41.143 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.143 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:09:41.143 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:09:41.143 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.143 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.143 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.143 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:09:41.143 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:09:41.143 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.143 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.143 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.143 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:09:41.143 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:09:41.143 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.143 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.143 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.143 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:09:41.143 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:09:41.143 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.143 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.143 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.143 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:09:41.143 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:09:41.143 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.143 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.143 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.143 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:09:41.143 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:09:41.143 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.143 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.143 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.143 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:09:41.143 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:09:41.143 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.143 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.143 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.143 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:09:41.143 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:09:41.143 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.143 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.143 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.143 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:09:41.143 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:09:41.143 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.143 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.143 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.143 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 
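
The loop traced here is nvme_get populating the nvme2 associative array: functions.sh@16 pipes /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 into a while-read loop, IFS=: at @21 splits each "field : value" report line, the [[ -n ... ]] guard at @22 skips empty values, and @23 assigns through eval because the array name itself (nvme2) arrives as a parameter. A minimal standalone sketch of that parsing pattern, assuming nvme-cli is installed and the device exists (not the verbatim nvme/functions.sh source):

    #!/usr/bin/env bash
    # Sketch: parse "field : value" lines from nvme id-ctrl into a
    # globally scoped associative array, mirroring the trace above.
    declare -gA ctrl=()
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}       # strip padding around the field name
        val=${val# }                   # drop the single space after ':'
        [[ -n $val ]] || continue      # same guard as functions.sh@22
        ctrl[$reg]=$val                # trailing padding kept, cf. nvme2[sn]='12342 '
    done < <(nvme id-ctrl /dev/nvme2)  # needs root; the CI runs its own nvme-cli build
    printf 'mdts=%s oncs=%s\n' "${ctrl[mdts]}" "${ctrl[oncs]}"

On bash 4.3+ a nameref (declare -n) would avoid the eval; the namespace bookkeeping at functions.sh@53 further down does exactly that with local -n.
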
00:09:41.143 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:09:41.143 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.143 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.143 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.143 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:09:41.143 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:09:41.143 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.143 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.143 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.143 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:09:41.143 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:09:41.143 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.143 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.143 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.143 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:09:41.143 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:09:41.143 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.143 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.143 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.143 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:09:41.143 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:09:41.143 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.143 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.143 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.143 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:09:41.143 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:09:41.143 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.143 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.143 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.143 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:09:41.143 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:09:41.143 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.143 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.143 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.143 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:09:41.143 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:09:41.143 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.143 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.143 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.143 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:09:41.143 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:09:41.143 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.143 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.143 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.143 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:09:41.143 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:09:41.143 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.143 10:41:40 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:41.143 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.143 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:09:41.143 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:09:41.143 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.143 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.143 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.143 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:09:41.143 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:09:41.143 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.143 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.143 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.143 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:09:41.143 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:09:41.143 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.143 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.143 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.143 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:09:41.143 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:09:41.143 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.143 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.143 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.143 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:09:41.143 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:09:41.143 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.143 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.143 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:41.143 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:09:41.143 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:09:41.143 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.143 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.143 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:41.143 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:09:41.143 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:09:41.143 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.143 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.143 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.143 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:09:41.143 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:09:41.143 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.143 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.143 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:41.143 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:09:41.143 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:09:41.143 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.143 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.143 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:41.143 10:41:40 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:09:41.143 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:09:41.143 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.143 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.143 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.143 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:09:41.143 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:09:41.143 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.143 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.143 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.143 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:09:41.143 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:09:41.143 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.143 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.143 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:41.143 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:09:41.143 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:09:41.143 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.143 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.144 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.144 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:09:41.144 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:09:41.144 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.144 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.144 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.144 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:09:41.144 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:09:41.144 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.144 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.144 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.144 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:09:41.144 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:09:41.144 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.144 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.144 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.144 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:09:41.144 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:09:41.144 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.144 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.144 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.144 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:09:41.144 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:09:41.144 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.144 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.144 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:41.144 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:09:41.144 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:09:41.144 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 
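
The hex registers captured this way decode bit by bit. The oncs=0x15d stored just above advertises optional NVM commands; in the NVMe base specification's ONCS layout, bit 0 is Compare, bit 2 Dataset Management, bit 3 Write Zeroes and bit 6 Timestamp, all of which 0x15d sets. A small sketch of such a lookup (names mirror the arrays built in this trace):

    # Sketch: test capability bits in a captured hex register.
    oncs=0x15d                          # the value parsed into nvme2[oncs] above
    has_bit() { (( $1 & (1 << $2) )); }
    has_bit "$oncs" 2 && echo "Dataset Management (TRIM) supported"
    has_bit "$oncs" 3 && echo "Write Zeroes supported"
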
00:09:41.144 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.144 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:41.144 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:09:41.144 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:09:41.144 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.144 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.144 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.144 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:09:41.144 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:09:41.144 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.144 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.144 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.144 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:09:41.144 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:09:41.144 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.144 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.144 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.144 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:09:41.144 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:09:41.144 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.144 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.144 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:09:41.144 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:09:41.144 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:09:41.144 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.144 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.144 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.144 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:09:41.144 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:09:41.144 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.144 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.144 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.144 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:09:41.144 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:09:41.144 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.144 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.144 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.144 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:09:41.144 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:09:41.144 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.144 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.144 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.144 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:09:41.144 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:09:41.144 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.144 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.144 10:41:40 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.144 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:09:41.144 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:09:41.144 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.144 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.144 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.144 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:09:41.144 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:09:41.144 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.144 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.144 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:41.144 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:41.144 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:41.144 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.144 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.144 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:41.144 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:41.144 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:41.144 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.144 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.144 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:41.144 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:09:41.144 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:09:41.144 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.144 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.144 10:41:40 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:09:41.144 10:41:40 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:41.144 10:41:40 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:09:41.144 10:41:40 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:09:41.144 10:41:40 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:09:41.144 10:41:40 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:09:41.144 10:41:40 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:41.144 10:41:40 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:09:41.144 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.144 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.144 10:41:40 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:09:41.144 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:41.144 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.144 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.144 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:41.144 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:09:41.144 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:09:41.144 10:41:40 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:41.144 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.144 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:41.144 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:09:41.144 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:09:41.144 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.144 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.144 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:41.144 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:09:41.144 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:09:41.144 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.144 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.144 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:41.144 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:09:41.144 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:09:41.144 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.144 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.144 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:41.144 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:09:41.144 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:09:41.144 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.144 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.144 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:41.144 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:09:41.144 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:09:41.144 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.144 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.144 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:41.144 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:09:41.144 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:09:41.144 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.144 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.144 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:41.144 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:09:41.144 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:09:41.144 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.144 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.144 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.144 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:09:41.144 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:09:41.144 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.144 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.145 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.145 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:09:41.145 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:09:41.145 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.145 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
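
For nvme2n1 the geometry fields land first: nsze, ncap and nuse all parse as 0x100000 logical blocks, and flbas=0x4 selects LBA format 4 (bits 3:0 of FLBAS), which the lbaf4 entry further down reports as ms:0 lbads:12 rp:0 (in use). The capacity follows directly from the stored values, for example:

    # Sketch: namespace capacity from the captured id-ns fields.
    # nsze counts logical blocks; lbads is log2 of the block size,
    # taken here from the in-use format lbaf4 (lbads:12 -> 4096 B).
    nsze=0x100000
    lbads=12
    echo "$(( nsze * (1 << lbads) )) bytes"   # 4294967296, a 4 GiB namespace
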
00:09:41.145 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.145 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:09:41.145 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:09:41.145 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.145 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.145 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.145 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:09:41.145 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:09:41.145 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.145 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.145 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:41.145 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:09:41.145 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:09:41.145 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.145 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.145 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.145 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:09:41.145 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:09:41.145 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.145 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.145 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.145 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:09:41.145 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:09:41.145 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.145 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.145 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.145 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:09:41.145 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:09:41.145 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.145 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.145 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.145 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:09:41.145 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:09:41.145 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.145 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.145 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.145 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:09:41.145 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:09:41.145 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.145 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.145 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.145 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:09:41.145 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:09:41.145 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.145 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.145 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.145 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 
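
Most of the remaining nvme2n1 fields read back as 0 because QEMU's emulated controller leaves the optional namespace hints (nawun, npwg, nows and friends) unreported; the zeros are stored anyway, so later lookups never hit an unset key. Once a dump finishes, functions.sh@58 files the namespace into the controller's table through the nameref taken at @53. A sketch of that indirection, with array names mirroring the ones in this log:

    # Sketch: the nameref bookkeeping from functions.sh@53-58.
    declare -gA nvmes=() nvme2_ns=()
    nvmes[nvme2]=nvme2_ns              # controller -> name of its ns table
    declare -n _ctrl_ns=nvme2_ns       # the trace uses local -n inside a function
    ns=nvme2n1
    _ctrl_ns[${ns##*n}]=$ns            # ${ns##*n} strips through the last 'n' -> 1
    echo "${nvme2_ns[1]}"              # prints nvme2n1
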
00:09:41.145 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:09:41.145 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.145 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.145 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.145 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:09:41.145 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:09:41.145 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.145 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.145 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.145 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:09:41.145 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:09:41.145 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.145 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.145 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.145 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:09:41.145 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:09:41.145 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.145 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.145 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.145 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:09:41.145 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:09:41.145 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.145 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.145 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.145 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:09:41.145 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:09:41.145 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.145 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.145 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.145 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:09:41.145 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:09:41.145 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.145 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.145 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:41.145 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:09:41.145 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:09:41.145 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.145 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.145 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:41.145 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:09:41.145 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:09:41.145 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.145 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.145 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:41.145 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:09:41.145 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:09:41.145 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.145 10:41:40 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.145 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.145 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:09:41.145 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:09:41.145 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.145 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.145 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.145 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:09:41.145 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:09:41.145 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.145 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.145 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.145 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:09:41.145 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:09:41.145 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.145 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.145 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.145 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:09:41.145 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:09:41.145 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.145 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.145 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.145 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:09:41.145 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:09:41.145 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.145 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.145 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:41.145 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:09:41.145 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:09:41.145 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.145 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.145 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:41.145 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:09:41.145 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:09:41.145 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.145 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.146 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:41.146 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:41.146 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:41.146 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.146 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.146 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:41.146 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:41.146 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 
' 00:09:41.146 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.146 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.146 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:41.146 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:41.146 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:41.146 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.146 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.146 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:41.146 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:41.146 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:41.146 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.146 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.146 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:41.146 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:41.146 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:41.146 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.146 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.146 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:41.146 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:41.146 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:41.146 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.146 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.146 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:41.146 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:41.146 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:41.146 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.146 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.146 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:41.146 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:41.146 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:41.146 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.146 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.146 10:41:40 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:09:41.146 10:41:40 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:41.146 10:41:40 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:09:41.146 10:41:40 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:09:41.146 10:41:40 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:09:41.146 10:41:40 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:09:41.146 10:41:40 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:41.146 10:41:40 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:09:41.146 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.146 10:41:40 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:41.146 10:41:40 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:09:41.146 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:41.146 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.146 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.146 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:41.146 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:09:41.146 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:09:41.146 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.146 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.146 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:41.146 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:09:41.146 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:09:41.146 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.146 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.146 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:41.146 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:09:41.146 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:09:41.146 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.146 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.146 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:41.146 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:09:41.146 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:09:41.146 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.146 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.146 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:41.146 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:09:41.146 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:09:41.146 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.146 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.146 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:41.146 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:09:41.146 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:09:41.146 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.146 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.146 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:41.146 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:09:41.146 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:09:41.146 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.146 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.146 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:41.146 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:09:41.146 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:09:41.146 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.146 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.146 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.146 10:41:40 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:09:41.146 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:09:41.146 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.146 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.146 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.146 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:09:41.146 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:09:41.146 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.146 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.146 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.146 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:09:41.146 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:09:41.146 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.146 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.146 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.146 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:09:41.146 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:09:41.146 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.146 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.146 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:41.146 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:09:41.146 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:09:41.146 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.146 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.146 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.146 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:09:41.146 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:09:41.146 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.146 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.146 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.146 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:09:41.146 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:09:41.146 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.146 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.146 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.146 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:09:41.146 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:09:41.146 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.146 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.146 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.146 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:09:41.147 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:09:41.147 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.147 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.147 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.147 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:09:41.147 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:09:41.147 10:41:40 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:41.147 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.147 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.147 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:09:41.147 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:09:41.147 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.147 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.147 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.147 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:09:41.147 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:09:41.147 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.147 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.147 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.147 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:09:41.147 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:09:41.147 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.147 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.147 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.147 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:09:41.147 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:09:41.147 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.147 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.147 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.147 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:09:41.147 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:09:41.147 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.147 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.147 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.147 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:09:41.147 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:09:41.147 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.147 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.147 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.147 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:09:41.147 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:09:41.147 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.147 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.147 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.147 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:09:41.147 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:09:41.147 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.147 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.147 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:41.147 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:09:41.147 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:09:41.147 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.147 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.147 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ 
-n 128 ]] 00:09:41.147 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:09:41.147 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:09:41.147 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.147 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.147 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:41.147 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:09:41.147 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:09:41.147 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.147 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.147 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.147 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:09:41.147 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:09:41.147 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.147 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.147 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.147 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:09:41.147 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:09:41.147 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.147 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.147 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.147 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:09:41.147 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:09:41.147 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.147 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.147 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.147 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:09:41.147 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:09:41.147 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.147 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.147 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.147 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:09:41.147 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:09:41.147 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.147 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.147 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:41.147 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:09:41.147 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:09:41.147 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.147 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.147 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:41.147 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:09:41.147 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:09:41.147 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.147 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.147 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 
lbads:9 rp:0 ]] 00:09:41.147 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:41.147 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:41.147 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.147 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.147 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:41.147 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:41.147 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:41.147 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.147 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.147 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:41.147 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:41.147 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:41.147 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.147 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.147 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:41.147 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:41.147 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:41.147 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.147 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.147 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:41.147 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:41.147 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:41.147 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.147 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.147 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:41.147 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:41.147 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:41.147 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.147 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.147 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:41.147 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:41.147 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:41.147 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.147 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.147 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:41.147 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:41.147 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:41.147 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.147 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.147 10:41:40 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:09:41.147 10:41:40 nvme_fdp -- nvme/functions.sh@54 -- # for 
ns in "$ctrl/${ctrl##*/}n"* 00:09:41.148 10:41:40 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:09:41.148 10:41:40 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:09:41.148 10:41:40 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:09:41.148 10:41:40 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:09:41.148 10:41:40 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:41.148 10:41:40 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:09:41.148 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.148 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.148 10:41:40 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:09:41.148 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:41.148 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.148 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.148 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:41.148 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:09:41.148 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:09:41.148 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.148 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.148 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:41.148 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:09:41.148 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:09:41.148 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.148 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.148 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:41.148 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:09:41.148 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:09:41.148 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.148 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.148 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:41.148 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:09:41.148 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:09:41.148 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.148 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.148 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:41.148 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:09:41.148 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:09:41.148 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.148 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.148 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:41.148 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:09:41.148 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:09:41.148 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.148 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.148 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:41.148 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:09:41.148 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2n3[mc]=0x3 00:09:41.148 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.148 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.148 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:41.148 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:09:41.148 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:09:41.148 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.148 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.148 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.148 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:09:41.148 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:09:41.148 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.148 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.148 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.148 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:09:41.148 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:09:41.148 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.148 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.148 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.148 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:09:41.148 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:09:41.148 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.148 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.148 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.148 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:09:41.148 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:09:41.148 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.148 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.148 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:41.148 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:09:41.148 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:09:41.148 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.148 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.148 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.148 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:09:41.148 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:09:41.148 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.148 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.148 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.148 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:09:41.148 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:09:41.148 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.148 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.148 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.148 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:09:41.148 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:09:41.148 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.148 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
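
The trace above is the nvme_get helper from nvme/functions.sh caching every "reg : val" pair that `nvme id-ns` prints for a namespace (here nvme2n1, nvme2n2, nvme2n3 in turn) into a like-named bash associative array. A minimal, self-contained sketch of that parsing pattern follows, assuming nvme-cli's colon-separated id-ns output format; the fixed ns_info array name is illustrative only, since the script itself evals each assignment into a dynamically named array such as nvme2n2:

#!/usr/bin/env bash
# Sketch of the nvme_get loop traced above: split each line of
# `nvme id-ns` output on the first ':' and cache the pair in an
# associative array (hypothetical name ns_info).
declare -A ns_info
while IFS=: read -r reg val; do
    [[ -n $reg && -n $val ]] || continue        # skip blank or colon-less lines
    reg=${reg//[[:space:]]/}                    # "lbaf  4 " -> "lbaf4"
    val=${val#"${val%%[![:space:]]*}"}          # trim leading spaces from value
    ns_info[$reg]=$val                          # e.g. ns_info[nsze]=0x100000
done < <(/usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2)   # binary path as used in this log
echo "flbas=${ns_info[flbas]} nsze=${ns_info[nsze]}"

The real helper routes each assignment through eval (e.g. eval 'nvme2n2[nsze]="0x100000"') so the target array can be named after the device node, and once all namespaces of a controller are read, functions.sh@58-63 records the controller-level maps (_ctrl_ns, ctrls, nvmes, bdfs, ordered_ctrls) as seen later in this trace.
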
00:09:41.148 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.148 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:09:41.148 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:09:41.148 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.148 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.148 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.148 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:09:41.148 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:09:41.148 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.148 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.148 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.148 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:09:41.148 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:09:41.148 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.148 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.148 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.148 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:09:41.148 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:09:41.148 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.148 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.148 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.148 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:09:41.148 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:09:41.148 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.148 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.148 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.148 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:09:41.148 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:09:41.148 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.148 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.148 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.148 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:09:41.148 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:09:41.148 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.148 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.148 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.148 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:09:41.148 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:09:41.148 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.148 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.148 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.148 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:09:41.148 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:09:41.148 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.148 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.148 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.148 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:09:41.148 
10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:09:41.148 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.148 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.148 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:41.148 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:09:41.148 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:09:41.148 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.148 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.148 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:41.148 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:09:41.148 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:09:41.148 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.148 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.148 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:41.148 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:09:41.148 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:09:41.148 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.149 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.149 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.149 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:09:41.149 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:09:41.149 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.149 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.149 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.149 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:09:41.149 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:09:41.149 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.149 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.149 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.149 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:09:41.149 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:09:41.149 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.149 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.149 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.149 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:09:41.149 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:09:41.149 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.149 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.149 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.149 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:09:41.149 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:09:41.149 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.149 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.149 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:41.149 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:09:41.149 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2n3[nguid]=00000000000000000000000000000000 00:09:41.149 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.149 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.149 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:41.149 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:09:41.149 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:09:41.149 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.149 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.149 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:41.149 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:41.149 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:41.149 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.149 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.149 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:41.149 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:41.149 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:41.149 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.149 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.149 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:41.149 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:41.149 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:41.149 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.149 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.149 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:41.149 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:41.149 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:41.149 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.149 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.149 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:41.149 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:41.149 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:41.149 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.149 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.149 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:41.149 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:41.149 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:41.149 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.149 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.149 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:41.149 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:41.149 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:41.149 10:41:40 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:41.149 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.149 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:41.149 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:41.149 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:41.149 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.149 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.149 10:41:40 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:09:41.149 10:41:40 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:09:41.149 10:41:40 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:09:41.149 10:41:40 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:09:41.149 10:41:40 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:09:41.149 10:41:40 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:41.149 10:41:40 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:09:41.149 10:41:40 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:09:41.149 10:41:40 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:09:41.149 10:41:40 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:41.149 10:41:40 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:41.149 10:41:40 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:41.149 10:41:40 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:41.149 10:41:40 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:09:41.149 10:41:40 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:09:41.149 10:41:40 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:09:41.149 10:41:40 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:41.149 10:41:40 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:09:41.149 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.149 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.149 10:41:40 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:09:41.149 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:41.149 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.149 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.149 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:41.149 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:09:41.149 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:09:41.149 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.149 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.149 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:41.149 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:09:41.149 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:09:41.149 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.149 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.149 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:09:41.149 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:09:41.149 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:09:41.149 10:41:40 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:41.149 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.149 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:41.149 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:09:41.149 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:09:41.149 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.149 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.149 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:41.149 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:09:41.149 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:09:41.149 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.149 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.149 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:41.149 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:09:41.149 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:09:41.149 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.149 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.149 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:41.149 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:09:41.149 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:09:41.149 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.149 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.149 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:09:41.149 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:09:41.149 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:09:41.149 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.149 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.149 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:41.149 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:09:41.149 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:09:41.149 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.149 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.149 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.149 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:09:41.149 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:09:41.149 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.149 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.149 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:41.150 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:09:41.150 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:09:41.150 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.150 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.150 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.150 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:09:41.150 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:09:41.150 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.150 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.150 10:41:40 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.150 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:09:41.150 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:09:41.150 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.150 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.150 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:41.150 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:09:41.150 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:09:41.150 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.150 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.150 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:09:41.150 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:09:41.150 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:09:41.150 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.150 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.150 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.150 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:09:41.150 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:09:41.150 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.150 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.150 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:41.150 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:09:41.150 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:09:41.150 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.150 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.150 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:41.150 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:41.150 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:09:41.150 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.150 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.150 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.150 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:09:41.150 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:09:41.150 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.150 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.150 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.150 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:09:41.150 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:09:41.150 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.150 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.150 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.150 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:09:41.150 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:09:41.150 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.150 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.150 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.150 
10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:09:41.150 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:09:41.150 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.150 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.150 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.150 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:09:41.150 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:09:41.150 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.150 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.150 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.150 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:09:41.150 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:09:41.150 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.150 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.150 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:41.150 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:09:41.150 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:09:41.150 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.150 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.150 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:41.150 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:09:41.150 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:09:41.150 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.150 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.150 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:41.150 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:09:41.150 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:09:41.150 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.150 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.150 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:41.150 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:09:41.150 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:09:41.150 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.150 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.150 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:41.150 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:09:41.150 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:09:41.150 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.150 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.150 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.150 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:09:41.150 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:09:41.150 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.150 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.150 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.150 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:09:41.150 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:09:41.150 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:09:41.150 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.150 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.150 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:09:41.150 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:09:41.150 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.150 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.150 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.150 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:09:41.150 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:09:41.150 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.150 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.150 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:41.150 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:09:41.150 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:09:41.150 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.150 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.150 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:41.150 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:09:41.150 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:09:41.150 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.150 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.150 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.150 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:09:41.150 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:09:41.150 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.150 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.150 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.150 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:09:41.150 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:09:41.150 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.150 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.150 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.150 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:09:41.151 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:09:41.151 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.151 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.151 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.151 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:09:41.151 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:09:41.151 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.151 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.151 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.151 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:09:41.151 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:09:41.151 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.151 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.151 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.151 10:41:40 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:09:41.151 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:09:41.151 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.151 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.151 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.151 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:09:41.151 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:09:41.151 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.151 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.151 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.151 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:09:41.151 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:09:41.151 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.151 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.151 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.151 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:09:41.151 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:09:41.151 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.151 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.151 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.151 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:09:41.151 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:09:41.151 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.151 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.151 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.151 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:09:41.151 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:09:41.151 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.151 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.151 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.151 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:09:41.151 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:09:41.151 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.151 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.151 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.151 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:09:41.151 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:09:41.151 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.151 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.151 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.151 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:09:41.151 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:09:41.151 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.151 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.151 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.151 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:09:41.151 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:09:41.151 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.151 
10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.151 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.151 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:09:41.151 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:09:41.151 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.151 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.151 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.151 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:09:41.151 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:09:41.151 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.151 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.151 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:41.151 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:09:41.151 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:09:41.151 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.151 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.151 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.151 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:09:41.151 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:09:41.151 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.151 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.151 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.151 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:09:41.151 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:09:41.151 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.151 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.151 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.151 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:09:41.151 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:09:41.151 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.151 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.151 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.151 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:09:41.151 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:09:41.151 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.151 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.151 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.151 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:09:41.151 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:09:41.151 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.151 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.151 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.151 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:09:41.151 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:09:41.151 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.151 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.151 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.151 10:41:40 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:09:41.151 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:09:41.151 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.151 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.151 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:41.151 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:09:41.151 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:09:41.151 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.151 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.151 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:41.151 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:09:41.151 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:09:41.151 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.151 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.151 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.151 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:09:41.151 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:09:41.151 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.151 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.151 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:41.151 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:09:41.151 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:09:41.151 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.151 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.151 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:41.151 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:09:41.151 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:09:41.151 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.151 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.151 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.151 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:09:41.151 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:09:41.151 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.152 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.152 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.152 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:09:41.152 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:09:41.152 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.152 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.152 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:41.152 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:09:41.152 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:09:41.152 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.152 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.152 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.152 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:09:41.152 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:09:41.152 10:41:40 nvme_fdp -- nvme/functions.sh@21 
-- # IFS=: 00:09:41.152 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.152 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.152 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:09:41.152 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:09:41.152 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.152 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.152 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.152 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:09:41.152 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:09:41.152 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.152 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.152 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.152 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:09:41.152 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:09:41.152 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.152 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.152 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.152 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:09:41.152 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:09:41.152 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.152 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.152 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:41.152 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:09:41.152 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:09:41.152 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.152 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.152 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:41.152 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:09:41.152 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:09:41.152 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.152 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.152 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.152 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:09:41.152 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:09:41.152 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.152 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.152 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.152 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:09:41.152 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:09:41.152 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.152 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.152 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.152 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:09:41.152 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:09:41.152 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.152 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.152 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 
00:09:41.152 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:09:41.152 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:09:41.152 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.152 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.152 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.152 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:09:41.152 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:09:41.152 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.152 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.152 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.152 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:09:41.152 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:09:41.152 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.152 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.152 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.152 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:09:41.152 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:09:41.152 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.152 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.152 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.152 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:09:41.152 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:09:41.152 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.152 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.152 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.152 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:09:41.152 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:09:41.152 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.152 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.152 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.152 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:09:41.152 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:09:41.152 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.152 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.152 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:41.152 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:41.152 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:41.152 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.152 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.152 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:41.152 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:41.152 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:41.152 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.152 10:41:40 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:41.152 10:41:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:41.152 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:09:41.152 10:41:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:09:41.152 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:41.152 10:41:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.152 10:41:40 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:09:41.152 10:41:40 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:09:41.152 10:41:40 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:09:41.152 10:41:40 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:09:41.152 10:41:40 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:09:41.152 10:41:40 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:09:41.152 10:41:40 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:09:41.152 10:41:40 nvme_fdp -- nvme/functions.sh@204 -- # local _ctrls feature=fdp 00:09:41.152 10:41:40 nvme_fdp -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:09:41.152 10:41:40 nvme_fdp -- nvme/functions.sh@206 -- # get_ctrls_with_feature fdp 00:09:41.152 10:41:40 nvme_fdp -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:09:41.152 10:41:40 nvme_fdp -- nvme/functions.sh@194 -- # local ctrl feature=fdp 00:09:41.152 10:41:40 nvme_fdp -- nvme/functions.sh@196 -- # type -t ctrl_has_fdp 00:09:41.152 10:41:40 nvme_fdp -- nvme/functions.sh@196 -- # [[ function == function ]] 00:09:41.152 10:41:40 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:41.152 10:41:40 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme1 00:09:41.152 10:41:40 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme1 ctratt 00:09:41.152 10:41:40 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme1 00:09:41.152 10:41:40 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme1 00:09:41.152 10:41:40 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme1 ctratt 00:09:41.152 10:41:40 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:09:41.152 10:41:40 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:09:41.152 10:41:40 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:09:41.152 10:41:40 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:09:41.152 10:41:40 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:41.152 10:41:40 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:41.152 10:41:40 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:41.153 10:41:40 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:41.153 10:41:40 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme0 00:09:41.153 10:41:40 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme0 ctratt 00:09:41.153 10:41:40 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme0 00:09:41.153 10:41:40 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme0 00:09:41.153 10:41:40 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme0 ctratt 00:09:41.153 10:41:40 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:09:41.153 10:41:40 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:09:41.153 10:41:40 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:09:41.153 10:41:40 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 
00:09:41.153 10:41:40 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:41.153 10:41:40 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:41.153 10:41:40 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:41.153 10:41:40 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:41.153 10:41:40 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme3 00:09:41.153 10:41:40 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme3 ctratt 00:09:41.153 10:41:40 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme3 00:09:41.153 10:41:40 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme3 00:09:41.153 10:41:40 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme3 ctratt 00:09:41.153 10:41:40 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:09:41.153 10:41:40 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:09:41.153 10:41:41 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:09:41.153 10:41:41 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:09:41.153 10:41:41 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:09:41.153 10:41:41 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x88010 00:09:41.153 10:41:41 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:41.153 10:41:41 nvme_fdp -- nvme/functions.sh@199 -- # echo nvme3 00:09:41.153 10:41:41 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:41.153 10:41:41 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme2 00:09:41.153 10:41:41 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme2 ctratt 00:09:41.153 10:41:41 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme2 00:09:41.153 10:41:41 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme2 00:09:41.153 10:41:41 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme2 ctratt 00:09:41.153 10:41:41 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:09:41.153 10:41:41 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:09:41.153 10:41:41 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:09:41.153 10:41:41 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:09:41.153 10:41:41 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:41.153 10:41:41 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:41.153 10:41:41 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:41.153 10:41:41 nvme_fdp -- nvme/functions.sh@207 -- # (( 1 > 0 )) 00:09:41.153 10:41:41 nvme_fdp -- nvme/functions.sh@208 -- # echo nvme3 00:09:41.153 10:41:41 nvme_fdp -- nvme/functions.sh@209 -- # return 0 00:09:41.153 10:41:41 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:09:41.153 10:41:41 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:09:41.153 10:41:41 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:41.412 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:41.978 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:41.978 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:09:41.978 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:41.978 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:09:41.978 10:41:41 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:09:41.978 10:41:41 nvme_fdp -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:09:41.978 10:41:41 
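For readers following the trace: the long run of eval statements above is a single loop doing two jobs. It splits each "register : value" line of identify-controller output into entries of an nvme3[...] associative array, and a single arithmetic test on nvme3[ctratt] then decides FDP support (CTRATT bit 19, which is why the 0x88010 controller qualifies while the 0x8000 ones do not). A compressed bash sketch of that flow; the two inline sample lines stand in for real identify output and are not taken from this log:

    # Parse "name : value" pairs into an array, then gate on CTRATT bit 19 (FDP).
    declare -A nvme3
    while IFS=: read -r reg val; do
        # functions.sh does eval 'nvme3[reg]="val"'; a plain assignment works here
        [[ -n $val ]] && nvme3[${reg// /}]=${val// /}
    done < <(printf '%s\n' 'ctratt : 0x88010' 'sqes : 0x66')   # stand-in identify output
    (( nvme3[ctratt] & 1 << 19 )) && echo "controller supports FDP"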
nvme_fdp -- common/autotest_common.sh@1107 -- # xtrace_disable
00:09:41.978 10:41:41 nvme_fdp -- common/autotest_common.sh@10 -- # set +x
00:09:41.978 ************************************
00:09:41.978 START TEST nvme_flexible_data_placement
00:09:41.978 ************************************
00:09:41.978 10:41:41 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0'
00:09:42.236 Initializing NVMe Controllers
00:09:42.236 Attaching to 0000:00:13.0
00:09:42.236 Controller supports FDP Attached to 0000:00:13.0
00:09:42.236 Namespace ID: 1 Endurance Group ID: 1
00:09:42.236 Initialization complete.
00:09:42.236
00:09:42.236 ==================================
00:09:42.236 == FDP tests for Namespace: #01 ==
00:09:42.236 ==================================
00:09:42.236
00:09:42.236 Get Feature: FDP:
00:09:42.236 =================
00:09:42.236 Enabled: Yes
00:09:42.236 FDP configuration Index: 0
00:09:42.236
00:09:42.236 FDP configurations log page
00:09:42.236 ===========================
00:09:42.236 Number of FDP configurations: 1
00:09:42.236 Version: 0
00:09:42.236 Size: 112
00:09:42.236 FDP Configuration Descriptor: 0
00:09:42.236 Descriptor Size: 96
00:09:42.236 Reclaim Group Identifier format: 2
00:09:42.236 FDP Volatile Write Cache: Not Present
00:09:42.236 FDP Configuration: Valid
00:09:42.236 Vendor Specific Size: 0
00:09:42.236 Number of Reclaim Groups: 2
00:09:42.236 Number of Reclaim Unit Handles: 8
00:09:42.236 Max Placement Identifiers: 128
00:09:42.236 Number of Namespaces Supported: 256
00:09:42.236 Reclaim Unit Nominal Size: 6000000 bytes
00:09:42.236 Estimated Reclaim Unit Time Limit: Not Reported
00:09:42.236 RUH Desc #000: RUH Type: Initially Isolated
00:09:42.236 RUH Desc #001: RUH Type: Initially Isolated
00:09:42.236 RUH Desc #002: RUH Type: Initially Isolated
00:09:42.236 RUH Desc #003: RUH Type: Initially Isolated
00:09:42.236 RUH Desc #004: RUH Type: Initially Isolated
00:09:42.236 RUH Desc #005: RUH Type: Initially Isolated
00:09:42.236 RUH Desc #006: RUH Type: Initially Isolated
00:09:42.236 RUH Desc #007: RUH Type: Initially Isolated
00:09:42.236
00:09:42.236 FDP reclaim unit handle usage log page
00:09:42.236 ======================================
00:09:42.236 Number of Reclaim Unit Handles: 8
00:09:42.236 RUH Usage Desc #000: RUH Attributes: Controller Specified
00:09:42.236 RUH Usage Desc #001: RUH Attributes: Unused
00:09:42.236 RUH Usage Desc #002: RUH Attributes: Unused
00:09:42.236 RUH Usage Desc #003: RUH Attributes: Unused
00:09:42.236 RUH Usage Desc #004: RUH Attributes: Unused
00:09:42.236 RUH Usage Desc #005: RUH Attributes: Unused
00:09:42.236 RUH Usage Desc #006: RUH Attributes: Unused
00:09:42.236 RUH Usage Desc #007: RUH Attributes: Unused
00:09:42.236
00:09:42.236 FDP statistics log page
00:09:42.236 =======================
00:09:42.236 Host bytes with metadata written: 1510776832
00:09:42.236 Media bytes with metadata written: 1511010304
00:09:42.236 Media bytes erased: 0
00:09:42.236
00:09:42.236 FDP Reclaim unit handle status
00:09:42.236 ==============================
00:09:42.236 Number of RUHS descriptors: 2
00:09:42.236 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000005f36
00:09:42.236 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000
00:09:42.236
00:09:42.236 FDP write on placement id: 0 success
00:09:42.236
00:09:42.236 Set Feature: Enabling FDP events on Placement handle: #0 Success
00:09:42.236
00:09:42.236 IO mgmt send: RUH update for Placement ID: #0 Success
00:09:42.236
00:09:42.236 Get Feature: FDP Events for Placement handle: #0
00:09:42.236 ========================
00:09:42.236 Number of FDP Events: 6
00:09:42.236 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes
00:09:42.236 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes
00:09:42.236 FDP Event: #2 Type: Ctrlr Reset Modified RUHs Enabled: Yes
00:09:42.236 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes
00:09:42.236 FDP Event: #4 Type: Media Reallocated Enabled: No
00:09:42.236 FDP Event: #5 Type: Implicitly modified RUH Enabled: No
00:09:42.236
00:09:42.236 FDP events log page
00:09:42.236 ===================
00:09:42.236 Number of FDP events: 1
00:09:42.236 FDP Event #0:
00:09:42.236 Event Type: RU Not Written to Capacity
00:09:42.236 Placement Identifier: Valid
00:09:42.236 NSID: Valid
00:09:42.236 Location: Valid
00:09:42.236 Placement Identifier: 0
00:09:42.236 Event Timestamp: 6
00:09:42.236 Namespace Identifier: 1
00:09:42.236 Reclaim Group Identifier: 0
00:09:42.236 Reclaim Unit Handle Identifier: 0
00:09:42.236
00:09:42.236 FDP test passed
00:09:42.236
00:09:42.236 real 0m0.204s
00:09:42.236 user 0m0.051s
00:09:42.236 sys 0m0.052s
00:09:42.236 10:41:42 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1126 -- # xtrace_disable
00:09:42.236 10:41:42 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x
00:09:42.236 ************************************
00:09:42.236 END TEST nvme_flexible_data_placement
00:09:42.236 ************************************
00:09:42.236 ************************************
00:09:42.236 END TEST nvme_fdp
00:09:42.236 ************************************
00:09:42.236
00:09:42.236 real 0m7.286s
00:09:42.236 user 0m0.963s
00:09:42.236 sys 0m1.269s
00:09:42.236 10:41:42 nvme_fdp -- common/autotest_common.sh@1126 -- # xtrace_disable
00:09:42.236 10:41:42 nvme_fdp -- common/autotest_common.sh@10 -- # set +x
00:09:42.236 10:41:42 -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]]
00:09:42.236 10:41:42 -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh
00:09:42.237 10:41:42 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:09:42.237 10:41:42 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:09:42.237 10:41:42 -- common/autotest_common.sh@10 -- # set +x
00:09:42.495 ************************************
00:09:42.495 START TEST nvme_rpc
00:09:42.495 ************************************
00:09:42.495 10:41:42 nvme_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh
00:09:42.495 * Looking for test storage...
00:09:42.495 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:42.495 10:41:42 nvme_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:42.495 10:41:42 nvme_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:09:42.495 10:41:42 nvme_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:42.495 10:41:42 nvme_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:42.495 10:41:42 nvme_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:42.495 10:41:42 nvme_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:42.495 10:41:42 nvme_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:42.495 10:41:42 nvme_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:09:42.495 10:41:42 nvme_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:09:42.495 10:41:42 nvme_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:09:42.495 10:41:42 nvme_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:09:42.495 10:41:42 nvme_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:09:42.495 10:41:42 nvme_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:09:42.495 10:41:42 nvme_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:09:42.495 10:41:42 nvme_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:42.495 10:41:42 nvme_rpc -- scripts/common.sh@344 -- # case "$op" in 00:09:42.495 10:41:42 nvme_rpc -- scripts/common.sh@345 -- # : 1 00:09:42.495 10:41:42 nvme_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:42.495 10:41:42 nvme_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:42.495 10:41:42 nvme_rpc -- scripts/common.sh@365 -- # decimal 1 00:09:42.495 10:41:42 nvme_rpc -- scripts/common.sh@353 -- # local d=1 00:09:42.495 10:41:42 nvme_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:42.495 10:41:42 nvme_rpc -- scripts/common.sh@355 -- # echo 1 00:09:42.495 10:41:42 nvme_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:42.495 10:41:42 nvme_rpc -- scripts/common.sh@366 -- # decimal 2 00:09:42.495 10:41:42 nvme_rpc -- scripts/common.sh@353 -- # local d=2 00:09:42.495 10:41:42 nvme_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:42.495 10:41:42 nvme_rpc -- scripts/common.sh@355 -- # echo 2 00:09:42.495 10:41:42 nvme_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:42.495 10:41:42 nvme_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:42.495 10:41:42 nvme_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:42.495 10:41:42 nvme_rpc -- scripts/common.sh@368 -- # return 0 00:09:42.495 10:41:42 nvme_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:42.495 10:41:42 nvme_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:42.495 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:42.495 --rc genhtml_branch_coverage=1 00:09:42.495 --rc genhtml_function_coverage=1 00:09:42.495 --rc genhtml_legend=1 00:09:42.495 --rc geninfo_all_blocks=1 00:09:42.495 --rc geninfo_unexecuted_blocks=1 00:09:42.495 00:09:42.495 ' 00:09:42.495 10:41:42 nvme_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:42.495 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:42.495 --rc genhtml_branch_coverage=1 00:09:42.495 --rc genhtml_function_coverage=1 00:09:42.495 --rc genhtml_legend=1 00:09:42.495 --rc geninfo_all_blocks=1 00:09:42.495 --rc geninfo_unexecuted_blocks=1 00:09:42.495 00:09:42.495 ' 00:09:42.495 10:41:42 nvme_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 
00:09:42.495 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:42.495 --rc genhtml_branch_coverage=1 00:09:42.495 --rc genhtml_function_coverage=1 00:09:42.495 --rc genhtml_legend=1 00:09:42.495 --rc geninfo_all_blocks=1 00:09:42.495 --rc geninfo_unexecuted_blocks=1 00:09:42.495 00:09:42.495 ' 00:09:42.495 10:41:42 nvme_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:42.495 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:42.495 --rc genhtml_branch_coverage=1 00:09:42.495 --rc genhtml_function_coverage=1 00:09:42.495 --rc genhtml_legend=1 00:09:42.495 --rc geninfo_all_blocks=1 00:09:42.495 --rc geninfo_unexecuted_blocks=1 00:09:42.495 00:09:42.495 ' 00:09:42.495 10:41:42 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:42.495 10:41:42 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:09:42.495 10:41:42 nvme_rpc -- common/autotest_common.sh@1507 -- # bdfs=() 00:09:42.495 10:41:42 nvme_rpc -- common/autotest_common.sh@1507 -- # local bdfs 00:09:42.495 10:41:42 nvme_rpc -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs)) 00:09:42.495 10:41:42 nvme_rpc -- common/autotest_common.sh@1508 -- # get_nvme_bdfs 00:09:42.495 10:41:42 nvme_rpc -- common/autotest_common.sh@1496 -- # bdfs=() 00:09:42.495 10:41:42 nvme_rpc -- common/autotest_common.sh@1496 -- # local bdfs 00:09:42.495 10:41:42 nvme_rpc -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:42.495 10:41:42 nvme_rpc -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:42.495 10:41:42 nvme_rpc -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:09:42.495 10:41:42 nvme_rpc -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:09:42.495 10:41:42 nvme_rpc -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:42.495 10:41:42 nvme_rpc -- common/autotest_common.sh@1510 -- # echo 0000:00:10.0 00:09:42.495 10:41:42 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:09:42.496 10:41:42 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=77568 00:09:42.496 10:41:42 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:09:42.496 10:41:42 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:09:42.496 10:41:42 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 77568 00:09:42.496 10:41:42 nvme_rpc -- common/autotest_common.sh@831 -- # '[' -z 77568 ']' 00:09:42.496 10:41:42 nvme_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:42.496 10:41:42 nvme_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:42.496 10:41:42 nvme_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:42.496 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:42.496 10:41:42 nvme_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:42.496 10:41:42 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:42.754 [2024-12-16 10:41:42.499809] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
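As context for the attach that follows, the get_first_nvme_bdf dance traced above reduces to one pipeline: gen_nvme.sh emits an attach-controller config fragment per device and jq pulls out each traddr. A minimal sketch of the pattern, reusing the paths from this run:

    # Enumerate NVMe BDFs and attach the first one as bdev "Nvme0".
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    (( ${#bdfs[@]} > 0 )) || exit 1   # bail out when no NVMe device is present
    "$rootdir/scripts/rpc.py" bdev_nvme_attach_controller -b Nvme0 -t PCIe -a "${bdfs[0]}"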
00:09:42.754 [2024-12-16 10:41:42.499922] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77568 ] 00:09:42.754 [2024-12-16 10:41:42.634456] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:42.754 [2024-12-16 10:41:42.667075] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:42.754 [2024-12-16 10:41:42.667133] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:09:43.689 10:41:43 nvme_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:43.689 10:41:43 nvme_rpc -- common/autotest_common.sh@864 -- # return 0 00:09:43.689 10:41:43 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:09:43.689 Nvme0n1 00:09:43.689 10:41:43 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:09:43.689 10:41:43 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:09:43.949 request: 00:09:43.949 { 00:09:43.949 "bdev_name": "Nvme0n1", 00:09:43.949 "filename": "non_existing_file", 00:09:43.949 "method": "bdev_nvme_apply_firmware", 00:09:43.949 "req_id": 1 00:09:43.949 } 00:09:43.949 Got JSON-RPC error response 00:09:43.949 response: 00:09:43.949 { 00:09:43.949 "code": -32603, 00:09:43.949 "message": "open file failed." 00:09:43.949 } 00:09:43.949 10:41:43 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:09:43.949 10:41:43 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:09:43.949 10:41:43 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:09:44.208 10:41:43 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:09:44.208 10:41:43 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 77568 00:09:44.208 10:41:43 nvme_rpc -- common/autotest_common.sh@950 -- # '[' -z 77568 ']' 00:09:44.208 10:41:43 nvme_rpc -- common/autotest_common.sh@954 -- # kill -0 77568 00:09:44.208 10:41:43 nvme_rpc -- common/autotest_common.sh@955 -- # uname 00:09:44.208 10:41:43 nvme_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:44.208 10:41:43 nvme_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 77568 00:09:44.208 10:41:44 nvme_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:44.208 10:41:44 nvme_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:44.208 killing process with pid 77568 00:09:44.208 10:41:44 nvme_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 77568' 00:09:44.208 10:41:44 nvme_rpc -- common/autotest_common.sh@969 -- # kill 77568 00:09:44.208 10:41:44 nvme_rpc -- common/autotest_common.sh@974 -- # wait 77568 00:09:44.466 00:09:44.466 real 0m2.050s 00:09:44.466 user 0m4.030s 00:09:44.466 sys 0m0.453s 00:09:44.466 10:41:44 nvme_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:44.466 10:41:44 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:44.466 ************************************ 00:09:44.466 END TEST nvme_rpc 00:09:44.466 ************************************ 00:09:44.466 10:41:44 -- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:09:44.466 10:41:44 -- common/autotest_common.sh@1101 -- # '[' 2 -le 
1 ']' 00:09:44.466 10:41:44 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:44.466 10:41:44 -- common/autotest_common.sh@10 -- # set +x 00:09:44.466 ************************************ 00:09:44.466 START TEST nvme_rpc_timeouts 00:09:44.466 ************************************ 00:09:44.466 10:41:44 nvme_rpc_timeouts -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:09:44.466 * Looking for test storage... 00:09:44.466 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:44.466 10:41:44 nvme_rpc_timeouts -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:44.466 10:41:44 nvme_rpc_timeouts -- common/autotest_common.sh@1681 -- # lcov --version 00:09:44.466 10:41:44 nvme_rpc_timeouts -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:44.466 10:41:44 nvme_rpc_timeouts -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:44.466 10:41:44 nvme_rpc_timeouts -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:44.466 10:41:44 nvme_rpc_timeouts -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:44.466 10:41:44 nvme_rpc_timeouts -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:44.466 10:41:44 nvme_rpc_timeouts -- scripts/common.sh@336 -- # IFS=.-: 00:09:44.466 10:41:44 nvme_rpc_timeouts -- scripts/common.sh@336 -- # read -ra ver1 00:09:44.466 10:41:44 nvme_rpc_timeouts -- scripts/common.sh@337 -- # IFS=.-: 00:09:44.466 10:41:44 nvme_rpc_timeouts -- scripts/common.sh@337 -- # read -ra ver2 00:09:44.466 10:41:44 nvme_rpc_timeouts -- scripts/common.sh@338 -- # local 'op=<' 00:09:44.466 10:41:44 nvme_rpc_timeouts -- scripts/common.sh@340 -- # ver1_l=2 00:09:44.466 10:41:44 nvme_rpc_timeouts -- scripts/common.sh@341 -- # ver2_l=1 00:09:44.466 10:41:44 nvme_rpc_timeouts -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:44.466 10:41:44 nvme_rpc_timeouts -- scripts/common.sh@344 -- # case "$op" in 00:09:44.466 10:41:44 nvme_rpc_timeouts -- scripts/common.sh@345 -- # : 1 00:09:44.466 10:41:44 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:44.466 10:41:44 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:44.466 10:41:44 nvme_rpc_timeouts -- scripts/common.sh@365 -- # decimal 1 00:09:44.466 10:41:44 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=1 00:09:44.466 10:41:44 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:44.466 10:41:44 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 1 00:09:44.724 10:41:44 nvme_rpc_timeouts -- scripts/common.sh@365 -- # ver1[v]=1 00:09:44.724 10:41:44 nvme_rpc_timeouts -- scripts/common.sh@366 -- # decimal 2 00:09:44.724 10:41:44 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=2 00:09:44.724 10:41:44 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:44.724 10:41:44 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 2 00:09:44.724 10:41:44 nvme_rpc_timeouts -- scripts/common.sh@366 -- # ver2[v]=2 00:09:44.724 10:41:44 nvme_rpc_timeouts -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:44.724 10:41:44 nvme_rpc_timeouts -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:44.724 10:41:44 nvme_rpc_timeouts -- scripts/common.sh@368 -- # return 0 00:09:44.724 10:41:44 nvme_rpc_timeouts -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:44.724 10:41:44 nvme_rpc_timeouts -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:44.724 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:44.724 --rc genhtml_branch_coverage=1 00:09:44.724 --rc genhtml_function_coverage=1 00:09:44.724 --rc genhtml_legend=1 00:09:44.724 --rc geninfo_all_blocks=1 00:09:44.724 --rc geninfo_unexecuted_blocks=1 00:09:44.724 00:09:44.724 ' 00:09:44.724 10:41:44 nvme_rpc_timeouts -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:44.724 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:44.724 --rc genhtml_branch_coverage=1 00:09:44.724 --rc genhtml_function_coverage=1 00:09:44.724 --rc genhtml_legend=1 00:09:44.724 --rc geninfo_all_blocks=1 00:09:44.724 --rc geninfo_unexecuted_blocks=1 00:09:44.724 00:09:44.724 ' 00:09:44.724 10:41:44 nvme_rpc_timeouts -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:44.724 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:44.724 --rc genhtml_branch_coverage=1 00:09:44.724 --rc genhtml_function_coverage=1 00:09:44.724 --rc genhtml_legend=1 00:09:44.724 --rc geninfo_all_blocks=1 00:09:44.724 --rc geninfo_unexecuted_blocks=1 00:09:44.724 00:09:44.724 ' 00:09:44.724 10:41:44 nvme_rpc_timeouts -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:44.724 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:44.724 --rc genhtml_branch_coverage=1 00:09:44.724 --rc genhtml_function_coverage=1 00:09:44.724 --rc genhtml_legend=1 00:09:44.724 --rc geninfo_all_blocks=1 00:09:44.724 --rc geninfo_unexecuted_blocks=1 00:09:44.724 00:09:44.724 ' 00:09:44.724 10:41:44 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:44.724 10:41:44 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_77622 00:09:44.724 10:41:44 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_77622 00:09:44.724 10:41:44 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=77654 00:09:44.724 10:41:44 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 
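Before the settings comparison output, note the trap registered at nvme_rpc_timeouts.sh@26 above: it guarantees that an interrupted run still kills the target and removes both settings snapshots. A generic sketch of that cleanup pattern, with a hypothetical daemon standing in for spdk_tgt:

    # Launch a long-running process, arm cleanup, then disarm on the success path.
    some_daemon & daemon_pid=$!   # hypothetical stand-in for spdk_tgt
    tmp_default=/tmp/settings_default_$$ tmp_modified=/tmp/settings_modified_$$
    trap 'kill -9 $daemon_pid; rm -f $tmp_default $tmp_modified; exit 1' SIGINT SIGTERM EXIT
    # ... run the checks against $tmp_default and $tmp_modified ...
    trap - SIGINT SIGTERM EXIT    # disarmed on success, as @52 does below
    kill "$daemon_pid"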
00:09:44.724 10:41:44 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 77654 00:09:44.724 10:41:44 nvme_rpc_timeouts -- common/autotest_common.sh@831 -- # '[' -z 77654 ']' 00:09:44.724 10:41:44 nvme_rpc_timeouts -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:44.724 10:41:44 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:09:44.724 10:41:44 nvme_rpc_timeouts -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:44.724 10:41:44 nvme_rpc_timeouts -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:44.724 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:44.724 10:41:44 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:44.724 10:41:44 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:09:44.724 [2024-12-16 10:41:44.529667] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:09:44.724 [2024-12-16 10:41:44.529783] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77654 ] 00:09:44.724 [2024-12-16 10:41:44.665786] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:44.724 [2024-12-16 10:41:44.698775] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:09:44.724 [2024-12-16 10:41:44.698856] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:45.662 10:41:45 nvme_rpc_timeouts -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:45.662 10:41:45 nvme_rpc_timeouts -- common/autotest_common.sh@864 -- # return 0 00:09:45.662 Checking default timeout settings: 00:09:45.662 10:41:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:09:45.662 10:41:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:09:45.925 Making settings changes with rpc: 00:09:45.925 10:41:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:09:45.925 10:41:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:09:45.925 Check default vs. modified settings: 00:09:45.925 10:41:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. 
modified settings: 00:09:45.925 10:41:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:09:46.499 10:41:46 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:09:46.499 10:41:46 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:46.499 10:41:46 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_77622 00:09:46.499 10:41:46 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:46.499 10:41:46 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:46.499 10:41:46 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:09:46.499 10:41:46 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_77622 00:09:46.499 10:41:46 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:46.499 10:41:46 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:46.499 10:41:46 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:09:46.499 10:41:46 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:09:46.499 10:41:46 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:09:46.499 Setting action_on_timeout is changed as expected. 00:09:46.499 10:41:46 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:46.499 10:41:46 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_77622 00:09:46.499 10:41:46 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:46.499 10:41:46 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:46.499 10:41:46 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:09:46.499 10:41:46 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:46.499 10:41:46 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_77622 00:09:46.499 10:41:46 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:46.499 Setting timeout_us is changed as expected. 00:09:46.499 10:41:46 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:09:46.499 10:41:46 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:09:46.499 10:41:46 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 
00:09:46.499 10:41:46 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:46.499 10:41:46 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:46.499 10:41:46 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_77622 00:09:46.499 10:41:46 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:46.499 10:41:46 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:09:46.499 10:41:46 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_77622 00:09:46.499 10:41:46 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:46.499 10:41:46 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:46.499 Setting timeout_admin_us is changed as expected. 00:09:46.499 10:41:46 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:09:46.499 10:41:46 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:09:46.499 10:41:46 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 00:09:46.499 10:41:46 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:09:46.499 10:41:46 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_77622 /tmp/settings_modified_77622 00:09:46.499 10:41:46 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 77654 00:09:46.499 10:41:46 nvme_rpc_timeouts -- common/autotest_common.sh@950 -- # '[' -z 77654 ']' 00:09:46.499 10:41:46 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # kill -0 77654 00:09:46.499 10:41:46 nvme_rpc_timeouts -- common/autotest_common.sh@955 -- # uname 00:09:46.499 10:41:46 nvme_rpc_timeouts -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:46.499 10:41:46 nvme_rpc_timeouts -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 77654 00:09:46.499 killing process with pid 77654 00:09:46.499 10:41:46 nvme_rpc_timeouts -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:46.499 10:41:46 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:46.499 10:41:46 nvme_rpc_timeouts -- common/autotest_common.sh@968 -- # echo 'killing process with pid 77654' 00:09:46.499 10:41:46 nvme_rpc_timeouts -- common/autotest_common.sh@969 -- # kill 77654 00:09:46.499 10:41:46 nvme_rpc_timeouts -- common/autotest_common.sh@974 -- # wait 77654 00:09:46.758 RPC TIMEOUT SETTING TEST PASSED. 00:09:46.758 10:41:46 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 
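The pass/fail logic above is one extraction routine run against two save_config snapshots, taken before and after bdev_nvme_set_options. A condensed sketch; the get_setting helper is introduced here for clarity and is not part of the script:

    # Pull one setting out of a saved-config snapshot and compare the two runs.
    get_setting() { grep "$1" "$2" | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g'; }
    before=$(get_setting timeout_us /tmp/settings_default_77622)    # yields 0
    after=$(get_setting timeout_us /tmp/settings_modified_77622)    # yields 12000000
    [ "$before" == "$after" ] && exit 1   # an unchanged setting fails the test
    echo "Setting timeout_us is changed as expected."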
00:09:46.758 00:09:46.758 real 0m2.333s 00:09:46.758 user 0m4.710s 00:09:46.758 sys 0m0.468s 00:09:46.758 10:41:46 nvme_rpc_timeouts -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:46.758 10:41:46 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:09:46.758 ************************************ 00:09:46.758 END TEST nvme_rpc_timeouts 00:09:46.758 ************************************ 00:09:46.758 10:41:46 -- spdk/autotest.sh@239 -- # uname -s 00:09:46.758 10:41:46 -- spdk/autotest.sh@239 -- # '[' Linux = Linux ']' 00:09:46.758 10:41:46 -- spdk/autotest.sh@240 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:09:46.758 10:41:46 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:46.758 10:41:46 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:46.758 10:41:46 -- common/autotest_common.sh@10 -- # set +x 00:09:46.758 ************************************ 00:09:46.758 START TEST sw_hotplug 00:09:46.758 ************************************ 00:09:46.758 10:41:46 sw_hotplug -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:09:47.017 * Looking for test storage... 00:09:47.017 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:47.017 10:41:46 sw_hotplug -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:47.017 10:41:46 sw_hotplug -- common/autotest_common.sh@1681 -- # lcov --version 00:09:47.017 10:41:46 sw_hotplug -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:47.017 10:41:46 sw_hotplug -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:47.017 10:41:46 sw_hotplug -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:47.017 10:41:46 sw_hotplug -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:47.017 10:41:46 sw_hotplug -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:47.017 10:41:46 sw_hotplug -- scripts/common.sh@336 -- # IFS=.-: 00:09:47.017 10:41:46 sw_hotplug -- scripts/common.sh@336 -- # read -ra ver1 00:09:47.017 10:41:46 sw_hotplug -- scripts/common.sh@337 -- # IFS=.-: 00:09:47.017 10:41:46 sw_hotplug -- scripts/common.sh@337 -- # read -ra ver2 00:09:47.017 10:41:46 sw_hotplug -- scripts/common.sh@338 -- # local 'op=<' 00:09:47.017 10:41:46 sw_hotplug -- scripts/common.sh@340 -- # ver1_l=2 00:09:47.017 10:41:46 sw_hotplug -- scripts/common.sh@341 -- # ver2_l=1 00:09:47.017 10:41:46 sw_hotplug -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:47.017 10:41:46 sw_hotplug -- scripts/common.sh@344 -- # case "$op" in 00:09:47.017 10:41:46 sw_hotplug -- scripts/common.sh@345 -- # : 1 00:09:47.017 10:41:46 sw_hotplug -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:47.017 10:41:46 sw_hotplug -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:47.017 10:41:46 sw_hotplug -- scripts/common.sh@365 -- # decimal 1 00:09:47.017 10:41:46 sw_hotplug -- scripts/common.sh@353 -- # local d=1 00:09:47.017 10:41:46 sw_hotplug -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:47.017 10:41:46 sw_hotplug -- scripts/common.sh@355 -- # echo 1 00:09:47.017 10:41:46 sw_hotplug -- scripts/common.sh@365 -- # ver1[v]=1 00:09:47.017 10:41:46 sw_hotplug -- scripts/common.sh@366 -- # decimal 2 00:09:47.017 10:41:46 sw_hotplug -- scripts/common.sh@353 -- # local d=2 00:09:47.017 10:41:46 sw_hotplug -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:47.017 10:41:46 sw_hotplug -- scripts/common.sh@355 -- # echo 2 00:09:47.017 10:41:46 sw_hotplug -- scripts/common.sh@366 -- # ver2[v]=2 00:09:47.017 10:41:46 sw_hotplug -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:47.017 10:41:46 sw_hotplug -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:47.017 10:41:46 sw_hotplug -- scripts/common.sh@368 -- # return 0 00:09:47.017 10:41:46 sw_hotplug -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:47.017 10:41:46 sw_hotplug -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:47.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:47.017 --rc genhtml_branch_coverage=1 00:09:47.017 --rc genhtml_function_coverage=1 00:09:47.017 --rc genhtml_legend=1 00:09:47.017 --rc geninfo_all_blocks=1 00:09:47.017 --rc geninfo_unexecuted_blocks=1 00:09:47.017 00:09:47.017 ' 00:09:47.017 10:41:46 sw_hotplug -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:47.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:47.017 --rc genhtml_branch_coverage=1 00:09:47.017 --rc genhtml_function_coverage=1 00:09:47.017 --rc genhtml_legend=1 00:09:47.017 --rc geninfo_all_blocks=1 00:09:47.017 --rc geninfo_unexecuted_blocks=1 00:09:47.017 00:09:47.017 ' 00:09:47.017 10:41:46 sw_hotplug -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:47.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:47.017 --rc genhtml_branch_coverage=1 00:09:47.017 --rc genhtml_function_coverage=1 00:09:47.017 --rc genhtml_legend=1 00:09:47.017 --rc geninfo_all_blocks=1 00:09:47.017 --rc geninfo_unexecuted_blocks=1 00:09:47.017 00:09:47.017 ' 00:09:47.017 10:41:46 sw_hotplug -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:47.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:47.017 --rc genhtml_branch_coverage=1 00:09:47.017 --rc genhtml_function_coverage=1 00:09:47.017 --rc genhtml_legend=1 00:09:47.017 --rc geninfo_all_blocks=1 00:09:47.017 --rc geninfo_unexecuted_blocks=1 00:09:47.017 00:09:47.017 ' 00:09:47.017 10:41:46 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:47.276 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:47.276 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:47.276 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:47.276 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:47.276 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:47.537 10:41:47 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:09:47.537 10:41:47 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:09:47.537 10:41:47 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 
00:09:47.537 10:41:47 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:09:47.537 10:41:47 sw_hotplug -- scripts/common.sh@312 -- # local bdf bdfs 00:09:47.537 10:41:47 sw_hotplug -- scripts/common.sh@313 -- # local nvmes 00:09:47.537 10:41:47 sw_hotplug -- scripts/common.sh@315 -- # [[ -n '' ]] 00:09:47.537 10:41:47 sw_hotplug -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:09:47.538 10:41:47 sw_hotplug -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:09:47.538 10:41:47 sw_hotplug -- scripts/common.sh@298 -- # local bdf= 00:09:47.538 10:41:47 sw_hotplug -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:09:47.538 10:41:47 sw_hotplug -- scripts/common.sh@233 -- # local class 00:09:47.538 10:41:47 sw_hotplug -- scripts/common.sh@234 -- # local subclass 00:09:47.538 10:41:47 sw_hotplug -- scripts/common.sh@235 -- # local progif 00:09:47.538 10:41:47 sw_hotplug -- scripts/common.sh@236 -- # printf %02x 1 00:09:47.538 10:41:47 sw_hotplug -- scripts/common.sh@236 -- # class=01 00:09:47.538 10:41:47 sw_hotplug -- scripts/common.sh@237 -- # printf %02x 8 00:09:47.538 10:41:47 sw_hotplug -- scripts/common.sh@237 -- # subclass=08 00:09:47.538 10:41:47 sw_hotplug -- scripts/common.sh@238 -- # printf %02x 2 00:09:47.538 10:41:47 sw_hotplug -- scripts/common.sh@238 -- # progif=02 00:09:47.538 10:41:47 sw_hotplug -- scripts/common.sh@240 -- # hash lspci 00:09:47.538 10:41:47 sw_hotplug -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:09:47.538 10:41:47 sw_hotplug -- scripts/common.sh@243 -- # grep -i -- -p02 00:09:47.538 10:41:47 sw_hotplug -- scripts/common.sh@242 -- # lspci -mm -n -D 00:09:47.538 10:41:47 sw_hotplug -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:09:47.538 10:41:47 sw_hotplug -- scripts/common.sh@245 -- # tr -d '"' 00:09:47.538 10:41:47 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:47.538 10:41:47 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:09:47.538 10:41:47 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:47.538 10:41:47 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:09:47.538 10:41:47 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:47.538 10:41:47 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:47.538 10:41:47 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:09:47.538 10:41:47 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:47.538 10:41:47 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:09:47.538 10:41:47 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:47.538 10:41:47 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:47.538 10:41:47 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:47.538 10:41:47 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:47.538 10:41:47 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:09:47.538 10:41:47 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:47.538 10:41:47 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:12.0 00:09:47.538 10:41:47 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:47.538 10:41:47 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:09:47.538 10:41:47 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:47.538 10:41:47 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:47.538 10:41:47 sw_hotplug -- 
scripts/common.sh@302 -- # echo 0000:00:12.0 00:09:47.538 10:41:47 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:47.538 10:41:47 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:13.0 00:09:47.538 10:41:47 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:47.538 10:41:47 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:47.538 10:41:47 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:47.538 10:41:47 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:47.538 10:41:47 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:13.0 00:09:47.538 10:41:47 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:47.538 10:41:47 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:09:47.538 10:41:47 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:47.538 10:41:47 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:47.538 10:41:47 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:47.538 10:41:47 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:47.538 10:41:47 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:09:47.538 10:41:47 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:47.538 10:41:47 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:47.538 10:41:47 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:47.538 10:41:47 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:47.538 10:41:47 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:09:47.538 10:41:47 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:47.538 10:41:47 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:47.538 10:41:47 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:47.538 10:41:47 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:47.538 10:41:47 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:09:47.538 10:41:47 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:47.538 10:41:47 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:47.538 10:41:47 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:47.538 10:41:47 sw_hotplug -- scripts/common.sh@328 -- # (( 4 )) 00:09:47.538 10:41:47 sw_hotplug -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:47.538 10:41:47 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:09:47.538 10:41:47 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:09:47.538 10:41:47 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:47.800 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:48.060 Waiting for block devices as requested 00:09:48.060 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:48.060 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:48.060 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:48.322 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:53.596 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:53.596 10:41:53 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:09:53.596 10:41:53 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # 
/home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:53.596 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:09:53.857 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:53.857 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:09:54.118 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:09:54.379 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:54.379 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:54.379 10:41:54 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:09:54.379 10:41:54 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:09:54.379 10:41:54 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:09:54.379 10:41:54 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:09:54.379 10:41:54 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=78499 00:09:54.379 10:41:54 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:09:54.379 10:41:54 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:09:54.379 10:41:54 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:09:54.379 10:41:54 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:09:54.379 10:41:54 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:09:54.379 10:41:54 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:09:54.379 10:41:54 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:09:54.379 10:41:54 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:09:54.379 10:41:54 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 false 00:09:54.379 10:41:54 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:09:54.379 10:41:54 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:09:54.379 10:41:54 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:09:54.379 10:41:54 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:09:54.379 10:41:54 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:09:54.640 Initializing NVMe Controllers 00:09:54.640 Attaching to 0000:00:10.0 00:09:54.640 Attaching to 0000:00:11.0 00:09:54.640 Attached to 0000:00:10.0 00:09:54.640 Attached to 0000:00:11.0 00:09:54.640 Initialization complete. Starting I/O... 
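For reference, the nvme_in_userspace walk traced a few records up reduces to this single pipeline (reassembled verbatim from the xtrace; 01/08/02 is PCI class mass-storage, subclass NVM, prog-if NVMe):

    lspci -mm -n -D | grep -i -- -p02 \
        | awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' \
        | tr -d '"'
    # -> 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 on this VM,
    #    trimmed to the first two by nvme_count=2 before the test starts.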
00:09:54.640 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:09:54.640 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 00:09:54.640 00:09:55.582 QEMU NVMe Ctrl (12340 ): 2488 I/Os completed (+2488) 00:09:55.582 QEMU NVMe Ctrl (12341 ): 2494 I/Os completed (+2494) 00:09:55.582 00:09:56.523 QEMU NVMe Ctrl (12340 ): 5522 I/Os completed (+3034) 00:09:56.524 QEMU NVMe Ctrl (12341 ): 5558 I/Os completed (+3064) 00:09:56.524 00:09:57.896 QEMU NVMe Ctrl (12340 ): 9298 I/Os completed (+3776) 00:09:57.896 QEMU NVMe Ctrl (12341 ): 9823 I/Os completed (+4265) 00:09:57.896 00:09:58.829 QEMU NVMe Ctrl (12340 ): 13172 I/Os completed (+3874) 00:09:58.829 QEMU NVMe Ctrl (12341 ): 13872 I/Os completed (+4049) 00:09:58.829 00:09:59.760 QEMU NVMe Ctrl (12340 ): 17122 I/Os completed (+3950) 00:09:59.760 QEMU NVMe Ctrl (12341 ): 17935 I/Os completed (+4063) 00:09:59.760 00:10:00.693 10:42:00 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:00.693 10:42:00 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:00.693 10:42:00 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:00.693 [2024-12-16 10:42:00.322058] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:10:00.693 Controller removed: QEMU NVMe Ctrl (12340 ) 00:10:00.693 [2024-12-16 10:42:00.323084] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:00.693 [2024-12-16 10:42:00.323122] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:00.693 [2024-12-16 10:42:00.323136] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:00.693 [2024-12-16 10:42:00.323151] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:00.693 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:00.693 [2024-12-16 10:42:00.324352] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:00.693 [2024-12-16 10:42:00.324390] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:00.693 [2024-12-16 10:42:00.324402] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:00.693 [2024-12-16 10:42:00.324415] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:00.693 10:42:00 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:00.693 10:42:00 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:00.693 [2024-12-16 10:42:00.342780] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:10:00.693 Controller removed: QEMU NVMe Ctrl (12341 ) 00:10:00.693 [2024-12-16 10:42:00.343730] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:00.693 [2024-12-16 10:42:00.343769] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:00.693 [2024-12-16 10:42:00.343785] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:00.693 [2024-12-16 10:42:00.343800] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:00.693 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:00.693 [2024-12-16 10:42:00.344828] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:00.693 [2024-12-16 10:42:00.344859] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:00.693 [2024-12-16 10:42:00.344875] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:00.693 [2024-12-16 10:42:00.344886] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:00.693 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:10:00.693 EAL: Scan for (pci) bus failed. 00:10:00.693 10:42:00 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:10:00.693 10:42:00 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:00.693 10:42:00 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:00.693 10:42:00 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:00.693 10:42:00 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:00.693 10:42:00 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:00.693 00:10:00.693 10:42:00 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:00.693 10:42:00 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:00.693 10:42:00 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:00.693 10:42:00 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:00.693 Attaching to 0000:00:10.0 00:10:00.693 Attached to 0000:00:10.0 00:10:00.693 10:42:00 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:00.693 10:42:00 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:00.693 10:42:00 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:00.693 Attaching to 0000:00:11.0 00:10:00.693 Attached to 0000:00:11.0 00:10:01.624 QEMU NVMe Ctrl (12340 ): 3567 I/Os completed (+3567) 00:10:01.624 QEMU NVMe Ctrl (12341 ): 3643 I/Os completed (+3643) 00:10:01.624 00:10:02.557 QEMU NVMe Ctrl (12340 ): 7398 I/Os completed (+3831) 00:10:02.557 QEMU NVMe Ctrl (12341 ): 7776 I/Os completed (+4133) 00:10:02.557 00:10:03.928 QEMU NVMe Ctrl (12340 ): 11147 I/Os completed (+3749) 00:10:03.928 QEMU NVMe Ctrl (12341 ): 11741 I/Os completed (+3965) 00:10:03.928 00:10:04.865 QEMU NVMe Ctrl (12340 ): 15353 I/Os completed (+4206) 00:10:04.866 QEMU NVMe Ctrl (12341 ): 15988 I/Os completed (+4247) 00:10:04.866 00:10:05.805 QEMU NVMe Ctrl (12340 ): 19666 I/Os completed (+4313) 00:10:05.805 QEMU NVMe Ctrl (12341 ): 20220 I/Os completed (+4232) 00:10:05.805 00:10:06.746 QEMU NVMe Ctrl (12340 ): 23276 I/Os completed (+3610) 00:10:06.746 QEMU NVMe Ctrl (12341 ): 24006 I/Os completed (+3786) 00:10:06.746 00:10:07.679 QEMU NVMe Ctrl (12340 ): 26611 I/Os completed (+3335) 00:10:07.679 QEMU NVMe Ctrl (12341 ): 27829 I/Os completed (+3823) 
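The bare "echo 1 / echo uio_pci_generic / echo <bdf>" steps in the trace above are writes into PCI sysfs attributes; xtrace shows only the values, not the targets. A hedged reconstruction of one remove/re-attach cycle — the paths below are my annotation using the standard kernel sysfs interface, not something the trace itself confirms:

    bdf=0000:00:10.0
    echo 1 > "/sys/bus/pci/devices/$bdf/remove"            # hot-remove the function
    echo 1 > /sys/bus/pci/rescan                           # rediscover it on the bus
    echo uio_pci_generic > "/sys/bus/pci/devices/$bdf/driver_override"
    echo "$bdf" > /sys/bus/pci/drivers_probe               # rebind; the hotplug app then re-attaches
    echo '' > "/sys/bus/pci/devices/$bdf/driver_override"  # clear the override again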
00:10:07.679 00:10:08.610 QEMU NVMe Ctrl (12340 ): 30408 I/Os completed (+3797) 00:10:08.610 QEMU NVMe Ctrl (12341 ): 32276 I/Os completed (+4447) 00:10:08.610 00:10:09.543 QEMU NVMe Ctrl (12340 ): 34575 I/Os completed (+4167) 00:10:09.543 QEMU NVMe Ctrl (12341 ): 36573 I/Os completed (+4297) 00:10:09.543 00:10:10.915 QEMU NVMe Ctrl (12340 ): 38890 I/Os completed (+4315) 00:10:10.915 QEMU NVMe Ctrl (12341 ): 40887 I/Os completed (+4314) 00:10:10.915 00:10:11.847 QEMU NVMe Ctrl (12340 ): 42872 I/Os completed (+3982) 00:10:11.847 QEMU NVMe Ctrl (12341 ): 45376 I/Os completed (+4489) 00:10:11.847 00:10:12.779 QEMU NVMe Ctrl (12340 ): 46578 I/Os completed (+3706) 00:10:12.779 QEMU NVMe Ctrl (12341 ): 49988 I/Os completed (+4612) 00:10:12.779 00:10:12.779 10:42:12 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:12.779 10:42:12 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:12.779 10:42:12 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:12.779 10:42:12 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:12.779 [2024-12-16 10:42:12.574383] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:10:12.779 Controller removed: QEMU NVMe Ctrl (12340 ) 00:10:12.779 [2024-12-16 10:42:12.575381] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:12.779 [2024-12-16 10:42:12.575421] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:12.779 [2024-12-16 10:42:12.575436] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:12.779 [2024-12-16 10:42:12.575453] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:12.779 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:12.779 [2024-12-16 10:42:12.576880] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:12.779 [2024-12-16 10:42:12.576924] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:12.779 [2024-12-16 10:42:12.576969] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:12.779 [2024-12-16 10:42:12.576983] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:12.779 10:42:12 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:12.779 10:42:12 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:12.779 [2024-12-16 10:42:12.594454] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:10:12.779 Controller removed: QEMU NVMe Ctrl (12341 ) 00:10:12.779 [2024-12-16 10:42:12.595345] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:12.779 [2024-12-16 10:42:12.595376] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:12.779 [2024-12-16 10:42:12.595390] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:12.779 [2024-12-16 10:42:12.595402] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:12.779 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:12.779 [2024-12-16 10:42:12.596388] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:12.779 [2024-12-16 10:42:12.596418] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:12.779 [2024-12-16 10:42:12.596434] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:12.779 [2024-12-16 10:42:12.596448] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:12.779 10:42:12 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:10:12.779 10:42:12 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:12.779 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:10:12.779 EAL: Scan for (pci) bus failed. 00:10:12.779 10:42:12 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:12.779 10:42:12 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:12.779 10:42:12 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:12.779 10:42:12 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:12.779 10:42:12 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:12.779 10:42:12 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:12.779 10:42:12 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:12.779 10:42:12 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:12.779 Attaching to 0000:00:10.0 00:10:12.779 Attached to 0000:00:10.0 00:10:13.037 10:42:12 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:13.037 10:42:12 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:13.037 10:42:12 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:13.037 Attaching to 0000:00:11.0 00:10:13.037 Attached to 0000:00:11.0 00:10:13.603 QEMU NVMe Ctrl (12340 ): 2809 I/Os completed (+2809) 00:10:13.603 QEMU NVMe Ctrl (12341 ): 2837 I/Os completed (+2837) 00:10:13.603 00:10:14.536 QEMU NVMe Ctrl (12340 ): 7012 I/Os completed (+4203) 00:10:14.536 QEMU NVMe Ctrl (12341 ): 7302 I/Os completed (+4465) 00:10:14.536 00:10:15.910 QEMU NVMe Ctrl (12340 ): 10888 I/Os completed (+3876) 00:10:15.910 QEMU NVMe Ctrl (12341 ): 11510 I/Os completed (+4208) 00:10:15.910 00:10:16.842 QEMU NVMe Ctrl (12340 ): 14749 I/Os completed (+3861) 00:10:16.842 QEMU NVMe Ctrl (12341 ): 15801 I/Os completed (+4291) 00:10:16.842 00:10:17.780 QEMU NVMe Ctrl (12340 ): 18704 I/Os completed (+3955) 00:10:17.780 QEMU NVMe Ctrl (12341 ): 20135 I/Os completed (+4334) 00:10:17.780 00:10:18.720 QEMU NVMe Ctrl (12340 ): 22726 I/Os completed (+4022) 00:10:18.720 QEMU NVMe Ctrl (12341 ): 24244 I/Os completed (+4109) 00:10:18.720 00:10:19.660 QEMU NVMe Ctrl (12340 ): 26437 I/Os completed (+3711) 00:10:19.660 QEMU NVMe Ctrl (12341 ): 28062 I/Os completed (+3818) 00:10:19.660 
00:10:20.599 QEMU NVMe Ctrl (12340 ): 30087 I/Os completed (+3650) 00:10:20.599 QEMU NVMe Ctrl (12341 ): 31883 I/Os completed (+3821) 00:10:20.599 00:10:21.539 QEMU NVMe Ctrl (12340 ): 33872 I/Os completed (+3785) 00:10:21.539 QEMU NVMe Ctrl (12341 ): 35682 I/Os completed (+3799) 00:10:21.539 00:10:22.924 QEMU NVMe Ctrl (12340 ): 37579 I/Os completed (+3707) 00:10:22.924 QEMU NVMe Ctrl (12341 ): 39436 I/Os completed (+3754) 00:10:22.924 00:10:23.866 QEMU NVMe Ctrl (12340 ): 40987 I/Os completed (+3408) 00:10:23.866 QEMU NVMe Ctrl (12341 ): 42883 I/Os completed (+3447) 00:10:23.866 00:10:24.830 QEMU NVMe Ctrl (12340 ): 44134 I/Os completed (+3147) 00:10:24.830 QEMU NVMe Ctrl (12341 ): 46142 I/Os completed (+3259) 00:10:24.830 00:10:25.093 10:42:24 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:25.093 10:42:24 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:25.093 10:42:24 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:25.093 10:42:24 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:25.093 [2024-12-16 10:42:24.836283] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:10:25.093 Controller removed: QEMU NVMe Ctrl (12340 ) 00:10:25.093 [2024-12-16 10:42:24.839555] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:25.093 [2024-12-16 10:42:24.839725] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:25.093 [2024-12-16 10:42:24.839764] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:25.093 [2024-12-16 10:42:24.839978] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:25.093 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:25.093 [2024-12-16 10:42:24.842004] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:25.093 [2024-12-16 10:42:24.842073] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:25.093 [2024-12-16 10:42:24.842089] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:25.093 [2024-12-16 10:42:24.842105] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:25.093 10:42:24 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:25.093 10:42:24 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:25.093 [2024-12-16 10:42:24.859881] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:10:25.093 Controller removed: QEMU NVMe Ctrl (12341 ) 00:10:25.093 [2024-12-16 10:42:24.861183] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:25.093 [2024-12-16 10:42:24.861258] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:25.093 [2024-12-16 10:42:24.861278] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:25.093 [2024-12-16 10:42:24.861292] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:25.093 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:25.093 [2024-12-16 10:42:24.862543] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:25.093 [2024-12-16 10:42:24.862591] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:25.093 [2024-12-16 10:42:24.862610] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:25.093 [2024-12-16 10:42:24.862624] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:25.093 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:10:25.093 EAL: Scan for (pci) bus failed. 00:10:25.093 10:42:24 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:10:25.093 10:42:24 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:25.093 10:42:24 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:25.093 10:42:24 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:25.093 10:42:24 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:25.093 10:42:25 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:25.093 10:42:25 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:25.093 10:42:25 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:25.093 10:42:25 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:25.093 10:42:25 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:25.093 Attaching to 0000:00:10.0 00:10:25.093 Attached to 0000:00:10.0 00:10:25.354 10:42:25 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:25.354 10:42:25 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:25.354 10:42:25 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:25.354 Attaching to 0000:00:11.0 00:10:25.354 Attached to 0000:00:11.0 00:10:25.354 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:25.354 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:25.354 [2024-12-16 10:42:25.163239] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:10:37.671 10:42:37 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:37.671 10:42:37 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:37.671 10:42:37 sw_hotplug -- common/autotest_common.sh@717 -- # time=42.84 00:10:37.671 10:42:37 sw_hotplug -- common/autotest_common.sh@718 -- # echo 42.84 00:10:37.671 10:42:37 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:10:37.671 10:42:37 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=42.84 00:10:37.671 10:42:37 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 42.84 2 00:10:37.671 remove_attach_helper took 42.84s to complete (handling 2 nvme drive(s)) 10:42:37 sw_hotplug -- 
nvme/sw_hotplug.sh@91 -- # sleep 6 00:10:44.238 10:42:43 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 78499 00:10:44.238 /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (78499) - No such process 00:10:44.238 10:42:43 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 78499 00:10:44.238 10:42:43 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:10:44.238 10:42:43 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:10:44.238 10:42:43 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:10:44.238 10:42:43 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=79049 00:10:44.238 10:42:43 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:10:44.238 10:42:43 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 79049 00:10:44.239 10:42:43 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:44.239 10:42:43 sw_hotplug -- common/autotest_common.sh@831 -- # '[' -z 79049 ']' 00:10:44.239 10:42:43 sw_hotplug -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:44.239 10:42:43 sw_hotplug -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:44.239 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:44.239 10:42:43 sw_hotplug -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:44.239 10:42:43 sw_hotplug -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:44.239 10:42:43 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:44.239 [2024-12-16 10:42:43.244990] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
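The 42.84 bookkeeping a few records up (time=42.84, TIMEFORMAT=%2R) is plain bash: the time keyword with TIMEFORMAT=%2R emits only the wall-clock seconds to two decimals. A minimal sketch of capturing that figure (helper name and redirection plumbing are illustrative; the printf format is the one in the log):

    time_helper_sketch() {
        local TIMEFORMAT=%2R helper_time
        helper_time=$({ time "$@" > /dev/null 2>&1; } 2>&1)   # `time` reports on stderr
        printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))\n' \
            "$helper_time" 2
    }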
00:10:44.239 [2024-12-16 10:42:43.245102] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79049 ] 00:10:44.239 [2024-12-16 10:42:43.378590] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:44.239 [2024-12-16 10:42:43.409370] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:44.239 10:42:44 sw_hotplug -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:44.239 10:42:44 sw_hotplug -- common/autotest_common.sh@864 -- # return 0 00:10:44.239 10:42:44 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:10:44.239 10:42:44 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.239 10:42:44 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:44.239 10:42:44 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.239 10:42:44 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:10:44.239 10:42:44 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:10:44.239 10:42:44 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:10:44.239 10:42:44 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:10:44.239 10:42:44 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:10:44.239 10:42:44 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:10:44.239 10:42:44 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:10:44.239 10:42:44 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 true 00:10:44.239 10:42:44 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:10:44.239 10:42:44 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:10:44.239 10:42:44 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:10:44.239 10:42:44 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:10:44.239 10:42:44 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:10:50.797 10:42:50 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:50.797 10:42:50 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:50.797 10:42:50 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:50.797 10:42:50 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:50.797 10:42:50 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:50.797 10:42:50 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:10:50.797 10:42:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:50.797 10:42:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:50.797 10:42:50 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:50.797 10:42:50 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:50.797 10:42:50 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:50.797 10:42:50 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.797 10:42:50 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:50.797 10:42:50 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.797 10:42:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:10:50.797 10:42:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:10:50.797 [2024-12-16 10:42:50.171263] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: 
[0000:00:10.0] in failed state. 00:10:50.797 [2024-12-16 10:42:50.172566] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:50.797 [2024-12-16 10:42:50.172600] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:50.797 [2024-12-16 10:42:50.172614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:50.797 [2024-12-16 10:42:50.172646] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:50.797 [2024-12-16 10:42:50.172655] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:50.797 [2024-12-16 10:42:50.172663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:50.797 [2024-12-16 10:42:50.172674] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:50.797 [2024-12-16 10:42:50.172681] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:50.797 [2024-12-16 10:42:50.172689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:50.797 [2024-12-16 10:42:50.172696] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:50.797 [2024-12-16 10:42:50.172703] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:50.797 [2024-12-16 10:42:50.172710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:50.797 [2024-12-16 10:42:50.571260] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
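In this bdev-backed phase, "gone" is judged through the running target rather than raw PCI state: the bdev_bdfs helper traced above reduces to this pipeline (jq filter and sort verbatim from the trace; rpc_cmd is the autotest wrapper around the repo's scripts/rpc.py client):

    scripts/rpc.py bdev_get_bdevs \
        | jq -r '.[].driver_specific.nvme[].pci_address' \
        | sort -u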
00:10:50.797 [2024-12-16 10:42:50.572327] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:50.797 [2024-12-16 10:42:50.572359] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:50.797 [2024-12-16 10:42:50.572369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:50.797 [2024-12-16 10:42:50.572381] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:50.797 [2024-12-16 10:42:50.572388] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:50.797 [2024-12-16 10:42:50.572397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:50.797 [2024-12-16 10:42:50.572404] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:50.797 [2024-12-16 10:42:50.572412] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:50.797 [2024-12-16 10:42:50.572419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:50.797 [2024-12-16 10:42:50.572430] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:50.797 [2024-12-16 10:42:50.572436] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:50.797 [2024-12-16 10:42:50.572444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:50.797 10:42:50 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:10:50.797 10:42:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:50.797 10:42:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:50.797 10:42:50 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:50.797 10:42:50 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:50.797 10:42:50 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:50.797 10:42:50 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.797 10:42:50 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:50.797 10:42:50 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.797 10:42:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:10:50.797 10:42:50 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:50.797 10:42:50 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:50.797 10:42:50 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:50.797 10:42:50 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:51.056 10:42:50 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:51.056 10:42:50 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:51.056 10:42:50 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:51.056 10:42:50 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:51.056 10:42:50 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:10:51.056 10:42:50 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:51.056 10:42:50 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:51.056 10:42:50 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:03.261 10:43:02 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:03.261 10:43:02 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:03.261 10:43:02 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:03.261 10:43:02 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:03.261 10:43:02 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:03.261 10:43:02 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:03.261 10:43:02 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.261 10:43:02 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:03.261 10:43:02 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.261 10:43:02 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:03.261 10:43:02 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:03.261 10:43:02 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:03.261 10:43:02 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:03.262 [2024-12-16 10:43:02.971518] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:11:03.262 [2024-12-16 10:43:02.973008] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:03.262 [2024-12-16 10:43:02.973124] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:03.262 [2024-12-16 10:43:02.973220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:03.262 [2024-12-16 10:43:02.973338] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:03.262 [2024-12-16 10:43:02.973393] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:03.262 [2024-12-16 10:43:02.973422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:03.262 [2024-12-16 10:43:02.973449] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:03.262 [2024-12-16 10:43:02.973506] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:03.262 [2024-12-16 10:43:02.973535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:03.262 [2024-12-16 10:43:02.973560] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:03.262 [2024-12-16 10:43:02.973578] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:03.262 [2024-12-16 10:43:02.973656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:03.262 10:43:02 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:03.262 10:43:02 
sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:03.262 10:43:02 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:03.262 10:43:02 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:03.262 10:43:02 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:03.262 10:43:02 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:03.262 10:43:02 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:03.262 10:43:02 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:03.262 10:43:02 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.262 10:43:02 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:03.262 10:43:03 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.262 10:43:03 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:11:03.262 10:43:03 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:03.829 10:43:03 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:11:03.829 10:43:03 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:03.829 10:43:03 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:03.829 10:43:03 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:03.829 10:43:03 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:03.829 10:43:03 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.829 10:43:03 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:03.829 10:43:03 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:03.829 10:43:03 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.829 10:43:03 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:11:03.829 10:43:03 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:03.829 [2024-12-16 10:43:03.571524] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:11:03.829 [2024-12-16 10:43:03.572785] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:03.829 [2024-12-16 10:43:03.572896] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:03.829 [2024-12-16 10:43:03.572979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:03.829 [2024-12-16 10:43:03.573115] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:03.829 [2024-12-16 10:43:03.573144] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:03.829 [2024-12-16 10:43:03.573211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:03.829 [2024-12-16 10:43:03.573237] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:03.829 [2024-12-16 10:43:03.573282] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:03.829 [2024-12-16 10:43:03.573345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:03.829 [2024-12-16 10:43:03.573375] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:03.829 [2024-12-16 10:43:03.573431] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:03.829 [2024-12-16 10:43:03.573462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:04.087 10:43:04 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:11:04.087 10:43:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:04.087 10:43:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:04.087 10:43:04 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:04.087 10:43:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:04.087 10:43:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:04.087 10:43:04 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.087 10:43:04 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:04.087 10:43:04 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.345 10:43:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:04.345 10:43:04 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:04.345 10:43:04 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:04.345 10:43:04 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:04.345 10:43:04 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:04.345 10:43:04 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:04.345 10:43:04 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:04.345 10:43:04 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:04.345 10:43:04 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:04.345 10:43:04 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
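Put together, the "(( 1 > 0 )) ... sleep 0.5 ... Still waiting" records above form one polling loop. Its shape, sketched (loop framing is mine; bdev_bdfs, bdfs and the printf are taken from the trace):

    while bdfs=($(bdev_bdfs)) && (( ${#bdfs[@]} > 0 )); do
        printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
        sleep 0.5
    done
    # falls through once bdev_get_bdevs no longer reports the removed BDFs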
00:11:04.345 10:43:04 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:04.345 10:43:04 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:04.345 10:43:04 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:16.554 10:43:16 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:16.554 10:43:16 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:16.554 10:43:16 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:16.554 10:43:16 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:16.554 10:43:16 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:16.554 10:43:16 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:16.554 10:43:16 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.554 10:43:16 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:16.554 10:43:16 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.554 10:43:16 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:16.554 10:43:16 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:16.554 10:43:16 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:16.554 10:43:16 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:16.554 [2024-12-16 10:43:16.371783] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:11:16.554 [2024-12-16 10:43:16.374108] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:16.554 [2024-12-16 10:43:16.374218] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:16.554 [2024-12-16 10:43:16.374286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:16.554 [2024-12-16 10:43:16.374318] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:16.554 [2024-12-16 10:43:16.374337] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:16.554 [2024-12-16 10:43:16.374361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:16.554 [2024-12-16 10:43:16.374386] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:16.554 [2024-12-16 10:43:16.374402] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:16.554 [2024-12-16 10:43:16.374523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:16.554 [2024-12-16 10:43:16.374552] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:16.554 [2024-12-16 10:43:16.374570] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:16.554 [2024-12-16 10:43:16.374624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:16.554 10:43:16 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:16.554 10:43:16 sw_hotplug -- 
nvme/sw_hotplug.sh@40 -- # echo 1 00:11:16.554 10:43:16 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:16.554 10:43:16 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:16.554 10:43:16 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:16.554 10:43:16 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:16.554 10:43:16 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:16.554 10:43:16 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:16.554 10:43:16 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.554 10:43:16 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:16.554 10:43:16 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.554 10:43:16 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:11:16.554 10:43:16 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:17.121 10:43:16 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:11:17.121 10:43:16 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:17.121 10:43:16 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:17.121 10:43:16 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:17.121 10:43:16 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:17.121 10:43:16 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:17.121 10:43:16 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.121 10:43:16 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:17.121 10:43:16 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.121 10:43:16 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:11:17.121 10:43:16 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:17.121 [2024-12-16 10:43:16.971792] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:11:17.121 [2024-12-16 10:43:16.973005] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:17.121 [2024-12-16 10:43:16.973037] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:17.122 [2024-12-16 10:43:16.973047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:17.122 [2024-12-16 10:43:16.973059] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:17.122 [2024-12-16 10:43:16.973066] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:17.122 [2024-12-16 10:43:16.973077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:17.122 [2024-12-16 10:43:16.973084] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:17.122 [2024-12-16 10:43:16.973093] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:17.122 [2024-12-16 10:43:16.973099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:17.122 [2024-12-16 10:43:16.973109] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:17.122 [2024-12-16 10:43:16.973116] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:17.122 [2024-12-16 10:43:16.973124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:17.688 10:43:17 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:11:17.688 10:43:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:17.688 10:43:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:17.688 10:43:17 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:17.688 10:43:17 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:17.688 10:43:17 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:17.688 10:43:17 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.688 10:43:17 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:17.688 10:43:17 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.688 10:43:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:17.688 10:43:17 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:17.688 10:43:17 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:17.688 10:43:17 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:17.688 10:43:17 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:17.688 10:43:17 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:17.946 10:43:17 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:17.946 10:43:17 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:17.946 10:43:17 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:17.946 10:43:17 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
00:11:17.946 10:43:17 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:17.946 10:43:17 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:17.946 10:43:17 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:30.171 10:43:29 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:30.171 10:43:29 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:30.171 10:43:29 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:30.171 10:43:29 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:30.171 10:43:29 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:30.171 10:43:29 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:30.171 10:43:29 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.171 10:43:29 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:30.171 10:43:29 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.171 10:43:29 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:30.171 10:43:29 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:30.171 10:43:29 sw_hotplug -- common/autotest_common.sh@717 -- # time=45.72 00:11:30.171 10:43:29 sw_hotplug -- common/autotest_common.sh@718 -- # echo 45.72 00:11:30.171 10:43:29 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:11:30.171 10:43:29 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.72 00:11:30.171 10:43:29 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.72 2 00:11:30.171 remove_attach_helper took 45.72s to complete (handling 2 nvme drive(s)) 10:43:29 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:11:30.171 10:43:29 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.171 10:43:29 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:30.171 10:43:29 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.171 10:43:29 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:11:30.171 10:43:29 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.171 10:43:29 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:30.171 10:43:29 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.171 10:43:29 sw_hotplug -- nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 00:11:30.171 10:43:29 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:11:30.171 10:43:29 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:11:30.171 10:43:29 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:11:30.171 10:43:29 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:11:30.171 10:43:29 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:11:30.171 10:43:29 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:11:30.171 10:43:29 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 true 00:11:30.171 10:43:29 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:11:30.171 10:43:29 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:11:30.171 10:43:29 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:11:30.171 10:43:29 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:11:30.171 10:43:29 sw_hotplug -- 
nvme/sw_hotplug.sh@36 -- # sleep 6 00:11:36.794 10:43:35 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:36.794 10:43:35 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:36.794 10:43:35 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:36.794 10:43:35 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:36.794 10:43:35 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:36.794 10:43:35 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:36.794 10:43:35 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:36.794 10:43:35 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:36.794 10:43:35 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:36.794 10:43:35 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:36.794 10:43:35 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:36.794 10:43:35 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.794 10:43:35 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:36.794 10:43:35 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.794 10:43:35 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:36.794 10:43:35 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:36.794 [2024-12-16 10:43:35.922610] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:11:36.794 [2024-12-16 10:43:35.923578] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:36.794 [2024-12-16 10:43:35.923681] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:36.794 [2024-12-16 10:43:35.924376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:36.794 [2024-12-16 10:43:35.924640] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:36.794 [2024-12-16 10:43:35.924918] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:36.794 [2024-12-16 10:43:35.925220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:36.794 [2024-12-16 10:43:35.925443] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:36.794 [2024-12-16 10:43:35.925605] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:36.794 [2024-12-16 10:43:35.925820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:36.794 [2024-12-16 10:43:35.926011] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:36.794 [2024-12-16 10:43:35.926146] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:36.794 [2024-12-16 10:43:35.926402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:36.794 [2024-12-16 10:43:36.322646] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
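Pulling the scattered trace lines together, the helper invoked as remove_attach_helper 3 6 true has the shape below. The names remove_device and rescan_and_rebind are placeholders for the echo sequences sketched earlier, and the print/sleep ordering inside the poll loop is approximate; everything else follows the traced line numbers. Three rounds of roughly one second of polling plus the 12 s settle, on top of the initial 6 s sleep, account for the ~45 s reported above.

    # Shape of the traced helper: 3 hotplug rounds, 6 s settle, bdev-based polling.
    remove_attach_helper() {
        local hotplug_events=$1 hotplug_wait=$2 use_bdev=$3   # here: 3 6 true
        local dev bdfs
        sleep "$hotplug_wait"                          # @36
        while ((hotplug_events--)); do                 # @38: three rounds
            for dev in "${nvmes[@]}"; do               # @39-40: detach each controller
                remove_device "$dev"
            done
            while bdfs=($(bdev_bdfs)) && ((${#bdfs[@]} > 0)); do   # @50
                printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"   # @51
                sleep 0.5
            done
            rescan_and_rebind                          # @56-62
            sleep $((hotplug_wait * 2))                # @66: the 12 s settle
            bdfs=($(bdev_bdfs))                        # @70
            [[ ${bdfs[*]} == "${nvmes[*]}" ]]          # @71: every BDF is back
        done
    }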
00:11:36.794 [2024-12-16 10:43:36.324020] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:36.794 [2024-12-16 10:43:36.324066] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:36.794 [2024-12-16 10:43:36.324079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:36.794 [2024-12-16 10:43:36.324096] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:36.794 [2024-12-16 10:43:36.324105] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:36.794 [2024-12-16 10:43:36.324117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:36.794 [2024-12-16 10:43:36.324127] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:36.794 [2024-12-16 10:43:36.324139] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:36.794 [2024-12-16 10:43:36.324148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:36.794 [2024-12-16 10:43:36.324159] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:36.794 [2024-12-16 10:43:36.324168] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:36.794 [2024-12-16 10:43:36.324183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:36.794 10:43:36 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:36.794 10:43:36 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:36.794 10:43:36 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:36.795 10:43:36 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:36.795 10:43:36 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:36.795 10:43:36 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:36.795 10:43:36 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.795 10:43:36 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:36.795 10:43:36 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.795 10:43:36 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:36.795 10:43:36 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:36.795 10:43:36 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:36.795 10:43:36 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:36.795 10:43:36 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:36.795 10:43:36 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:36.795 10:43:36 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:36.795 10:43:36 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:36.795 10:43:36 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:36.795 10:43:36 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:11:36.795 10:43:36 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:36.795 10:43:36 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:36.795 10:43:36 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:49.033 10:43:48 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:49.033 10:43:48 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:49.033 10:43:48 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:49.033 10:43:48 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:49.033 10:43:48 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:49.034 10:43:48 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:49.034 10:43:48 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.034 10:43:48 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:49.034 10:43:48 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.034 10:43:48 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:49.034 10:43:48 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:49.034 10:43:48 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:49.034 10:43:48 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:49.034 10:43:48 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:49.034 10:43:48 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:49.034 10:43:48 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:49.034 10:43:48 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:49.034 10:43:48 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:49.034 10:43:48 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:49.034 10:43:48 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:49.034 10:43:48 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:49.034 10:43:48 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.034 10:43:48 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:49.034 10:43:48 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.034 10:43:48 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:49.034 10:43:48 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:49.034 [2024-12-16 10:43:48.822850] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
00:11:49.034 [2024-12-16 10:43:48.823668] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:49.034 [2024-12-16 10:43:48.823696] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:49.034 [2024-12-16 10:43:48.823710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:49.034 [2024-12-16 10:43:48.823724] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:49.034 [2024-12-16 10:43:48.823735] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:49.034 [2024-12-16 10:43:48.823742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:49.034 [2024-12-16 10:43:48.823751] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:49.034 [2024-12-16 10:43:48.823758] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:49.034 [2024-12-16 10:43:48.823766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:49.034 [2024-12-16 10:43:48.823773] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:49.034 [2024-12-16 10:43:48.823781] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:49.034 [2024-12-16 10:43:48.823787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:49.296 [2024-12-16 10:43:49.222856] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
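The "took 45.72s" and later "took 44.73s" summaries come from a small bash timing wrapper; the trace shows it setting TIMEFORMAT=%2R (elapsed real seconds, two decimals) and echoing the captured figure. A minimal sketch follows, with the output plumbing simplified: the real wrapper preserves the helper's own output via fd redirection (the exec at @709), while this version discards it for brevity.

    # Sketch of timing_cmd from the autotest_common.sh trace (@707-720);
    # exact fd handling in the real script is not visible in the log.
    timing_cmd() {
        local cmd_es=0
        local time=0 TIMEFORMAT=%2R       # `time` keyword prints bare seconds
        time=$( { time "$@" > /dev/null 2>&1; } 2>&1 ) || cmd_es=$?
        echo "$time"                      # e.g. 45.72
        return "$cmd_es"
    }

    helper_time=$(timing_cmd remove_attach_helper 3 6 true)
    printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))\n' \
        "$helper_time" 2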
00:11:49.296 [2024-12-16 10:43:49.223651] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:49.296 [2024-12-16 10:43:49.223687] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:49.296 [2024-12-16 10:43:49.223698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:49.296 [2024-12-16 10:43:49.223713] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:49.296 [2024-12-16 10:43:49.223721] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:49.296 [2024-12-16 10:43:49.223730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:49.296 [2024-12-16 10:43:49.223737] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:49.296 [2024-12-16 10:43:49.223745] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:49.296 [2024-12-16 10:43:49.223751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:49.296 [2024-12-16 10:43:49.223759] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:49.296 [2024-12-16 10:43:49.223765] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:49.296 [2024-12-16 10:43:49.223773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:49.557 10:43:49 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:49.557 10:43:49 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:49.557 10:43:49 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:49.557 10:43:49 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:49.557 10:43:49 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:49.557 10:43:49 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:49.557 10:43:49 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.557 10:43:49 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:49.557 10:43:49 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.557 10:43:49 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:49.557 10:43:49 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:49.557 10:43:49 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:49.557 10:43:49 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:49.557 10:43:49 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:49.557 10:43:49 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:49.817 10:43:49 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:49.817 10:43:49 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:49.817 10:43:49 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:49.817 10:43:49 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:11:49.817 10:43:49 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:49.817 10:43:49 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:49.817 10:43:49 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:02.025 10:44:01 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:02.025 10:44:01 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:02.025 10:44:01 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:02.025 10:44:01 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:02.025 10:44:01 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:02.025 10:44:01 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:02.025 10:44:01 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.025 10:44:01 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:02.025 10:44:01 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.025 10:44:01 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:02.025 10:44:01 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:02.025 10:44:01 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:02.025 10:44:01 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:02.025 10:44:01 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:02.025 10:44:01 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:02.025 [2024-12-16 10:44:01.723137] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:12:02.025 [2024-12-16 10:44:01.723955] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:02.025 [2024-12-16 10:44:01.723977] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:02.026 [2024-12-16 10:44:01.723989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:02.026 [2024-12-16 10:44:01.724001] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:02.026 [2024-12-16 10:44:01.724012] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:02.026 [2024-12-16 10:44:01.724018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:02.026 [2024-12-16 10:44:01.724026] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:02.026 [2024-12-16 10:44:01.724033] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:02.026 [2024-12-16 10:44:01.724041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:02.026 [2024-12-16 10:44:01.724047] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:02.026 [2024-12-16 10:44:01.724054] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:02.026 [2024-12-16 10:44:01.724061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) 
qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:02.026 10:44:01 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:02.026 10:44:01 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:02.026 10:44:01 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:02.026 10:44:01 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:02.026 10:44:01 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:02.026 10:44:01 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:02.026 10:44:01 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.026 10:44:01 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:02.026 10:44:01 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.026 10:44:01 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:12:02.026 10:44:01 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:02.296 [2024-12-16 10:44:02.123155] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 00:12:02.296 [2024-12-16 10:44:02.123979] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:02.296 [2024-12-16 10:44:02.124010] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:02.296 [2024-12-16 10:44:02.124021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:02.296 [2024-12-16 10:44:02.124033] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:02.296 [2024-12-16 10:44:02.124040] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:02.296 [2024-12-16 10:44:02.124048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:02.296 [2024-12-16 10:44:02.124054] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:02.296 [2024-12-16 10:44:02.124064] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:02.296 [2024-12-16 10:44:02.124071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:02.296 [2024-12-16 10:44:02.124079] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:02.296 [2024-12-16 10:44:02.124085] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:02.296 [2024-12-16 10:44:02.124092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:02.296 10:44:02 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:12:02.296 10:44:02 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:02.296 10:44:02 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:02.296 10:44:02 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:02.296 10:44:02 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:02.296 10:44:02 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd 
bdev_get_bdevs 00:12:02.296 10:44:02 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.296 10:44:02 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:02.296 10:44:02 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.555 10:44:02 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:02.555 10:44:02 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:02.555 10:44:02 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:02.555 10:44:02 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:02.555 10:44:02 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:02.555 10:44:02 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:02.555 10:44:02 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:02.555 10:44:02 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:02.555 10:44:02 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:02.555 10:44:02 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:12:02.555 10:44:02 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:02.555 10:44:02 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:02.555 10:44:02 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:14.750 10:44:14 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:14.750 10:44:14 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:14.750 10:44:14 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:14.750 10:44:14 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:14.750 10:44:14 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:14.750 10:44:14 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:14.750 10:44:14 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.750 10:44:14 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:14.750 10:44:14 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.750 10:44:14 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:14.750 10:44:14 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:14.750 10:44:14 sw_hotplug -- common/autotest_common.sh@717 -- # time=44.73 00:12:14.750 10:44:14 sw_hotplug -- common/autotest_common.sh@718 -- # echo 44.73 00:12:14.750 10:44:14 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:12:14.750 10:44:14 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=44.73 00:12:14.750 10:44:14 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 44.73 2 00:12:14.750 remove_attach_helper took 44.73s to complete (handling 2 nvme drive(s)) 10:44:14 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:12:14.750 10:44:14 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 79049 00:12:14.750 10:44:14 sw_hotplug -- common/autotest_common.sh@950 -- # '[' -z 79049 ']' 00:12:14.750 10:44:14 sw_hotplug -- common/autotest_common.sh@954 -- # kill -0 79049 00:12:14.750 10:44:14 sw_hotplug -- common/autotest_common.sh@955 -- # uname 00:12:14.750 10:44:14 sw_hotplug -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:14.750 10:44:14 sw_hotplug -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79049 00:12:14.750 10:44:14 sw_hotplug -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:14.750 10:44:14 
sw_hotplug -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:14.750 10:44:14 sw_hotplug -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79049' 00:12:14.750 killing process with pid 79049 00:12:14.750 10:44:14 sw_hotplug -- common/autotest_common.sh@969 -- # kill 79049 00:12:14.750 10:44:14 sw_hotplug -- common/autotest_common.sh@974 -- # wait 79049 00:12:15.011 10:44:14 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:15.274 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:15.846 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:15.846 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:15.846 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:12:15.846 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:12:15.846 00:12:15.846 real 2m29.052s 00:12:15.846 user 1m49.128s 00:12:15.846 sys 0m18.436s 00:12:15.846 10:44:15 sw_hotplug -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:15.846 ************************************ 00:12:15.846 END TEST sw_hotplug 00:12:15.846 ************************************ 00:12:15.846 10:44:15 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:15.846 10:44:15 -- spdk/autotest.sh@243 -- # [[ 1 -eq 1 ]] 00:12:15.846 10:44:15 -- spdk/autotest.sh@244 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:12:15.846 10:44:15 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:12:15.846 10:44:15 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:15.846 10:44:15 -- common/autotest_common.sh@10 -- # set +x 00:12:15.846 ************************************ 00:12:15.846 START TEST nvme_xnvme 00:12:15.846 ************************************ 00:12:15.846 10:44:15 nvme_xnvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:12:16.107 * Looking for test storage... 
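Pid 79049 is the SPDK reactor that hosted the hotplug test; killprocess tears it down before the xnvme suite starts. Its shape is almost fully visible in the trace and is reconstructed below, with the sudo special case reduced to a comment since it does not fire in this run:

    # Reconstructed from the autotest_common.sh trace (@950-974); the early
    # returns are abbreviated guesses at the untraced branches.
    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1          # @950
        kill -0 "$pid" || return 0         # @954: nothing to do if already gone
        if [ "$(uname)" = Linux ]; then    # @955
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid")   # @956: reactor_0 here
            # @960: if this were "sudo", the real target would be its child;
            # that branch is not taken in this run, so it is omitted.
        fi
        echo "killing process with pid $pid"   # @968
        kill "$pid"                            # @969
        wait "$pid" || true                    # @974: reap it, ignore exit status
    }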
00:12:16.107 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:16.107 10:44:15 nvme_xnvme -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:12:16.107 10:44:15 nvme_xnvme -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:12:16.107 10:44:15 nvme_xnvme -- common/autotest_common.sh@1681 -- # lcov --version 00:12:16.107 10:44:15 nvme_xnvme -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:12:16.107 10:44:15 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:16.107 10:44:15 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:16.107 10:44:15 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:16.107 10:44:15 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:12:16.107 10:44:15 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:12:16.107 10:44:15 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:12:16.107 10:44:15 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:12:16.107 10:44:15 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:12:16.107 10:44:15 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:12:16.107 10:44:15 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:12:16.108 10:44:15 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:16.108 10:44:15 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:12:16.108 10:44:15 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:12:16.108 10:44:15 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:16.108 10:44:15 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:16.108 10:44:15 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:12:16.108 10:44:15 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:12:16.108 10:44:15 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:16.108 10:44:15 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:12:16.108 10:44:15 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:12:16.108 10:44:15 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:12:16.108 10:44:15 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:12:16.108 10:44:15 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:16.108 10:44:15 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:12:16.108 10:44:15 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:12:16.108 10:44:15 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:16.108 10:44:15 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:16.108 10:44:15 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:12:16.108 10:44:15 nvme_xnvme -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:16.108 10:44:15 nvme_xnvme -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:12:16.108 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:16.108 --rc genhtml_branch_coverage=1 00:12:16.108 --rc genhtml_function_coverage=1 00:12:16.108 --rc genhtml_legend=1 00:12:16.108 --rc geninfo_all_blocks=1 00:12:16.108 --rc geninfo_unexecuted_blocks=1 00:12:16.108 00:12:16.108 ' 00:12:16.108 10:44:15 nvme_xnvme -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:12:16.108 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:16.108 --rc genhtml_branch_coverage=1 00:12:16.108 --rc genhtml_function_coverage=1 00:12:16.108 --rc genhtml_legend=1 00:12:16.108 --rc geninfo_all_blocks=1 00:12:16.108 --rc geninfo_unexecuted_blocks=1 00:12:16.108 00:12:16.108 ' 00:12:16.108 10:44:15 
nvme_xnvme -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:12:16.108 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:16.108 --rc genhtml_branch_coverage=1 00:12:16.108 --rc genhtml_function_coverage=1 00:12:16.108 --rc genhtml_legend=1 00:12:16.108 --rc geninfo_all_blocks=1 00:12:16.108 --rc geninfo_unexecuted_blocks=1 00:12:16.108 00:12:16.108 ' 00:12:16.108 10:44:15 nvme_xnvme -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:12:16.108 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:16.108 --rc genhtml_branch_coverage=1 00:12:16.108 --rc genhtml_function_coverage=1 00:12:16.108 --rc genhtml_legend=1 00:12:16.108 --rc geninfo_all_blocks=1 00:12:16.108 --rc geninfo_unexecuted_blocks=1 00:12:16.108 00:12:16.108 ' 00:12:16.108 10:44:15 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:16.108 10:44:15 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:12:16.108 10:44:15 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:16.108 10:44:15 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:16.108 10:44:15 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:16.108 10:44:15 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:16.108 10:44:15 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:16.108 10:44:15 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:16.108 10:44:15 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:12:16.108 10:44:15 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:16.108 10:44:15 nvme_xnvme -- xnvme/xnvme.sh@85 -- # run_test xnvme_to_malloc_dd_copy malloc_to_xnvme_copy 00:12:16.108 10:44:15 nvme_xnvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:12:16.108 10:44:15 nvme_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:16.108 10:44:15 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:16.108 
************************************ 00:12:16.108 START TEST xnvme_to_malloc_dd_copy 00:12:16.108 ************************************ 00:12:16.108 10:44:15 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1125 -- # malloc_to_xnvme_copy 00:12:16.108 10:44:15 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@14 -- # init_null_blk gb=1 00:12:16.108 10:44:15 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@186 -- # [[ -e /sys/module/null_blk ]] 00:12:16.108 10:44:15 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@186 -- # modprobe null_blk gb=1 00:12:16.108 10:44:15 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@187 -- # return 00:12:16.108 10:44:15 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@16 -- # local mbdev0=malloc0 mbdev0_bs=512 00:12:16.108 10:44:15 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # xnvme_io=() 00:12:16.108 10:44:15 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:12:16.108 10:44:15 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@18 -- # local io 00:12:16.108 10:44:15 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@20 -- # xnvme_io+=(libaio) 00:12:16.108 10:44:15 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@21 -- # xnvme_io+=(io_uring) 00:12:16.108 10:44:15 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@25 -- # mbdev0_b=2097152 00:12:16.108 10:44:15 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@26 -- # xnvme0_dev=/dev/nullb0 00:12:16.108 10:44:15 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='2097152' ['block_size']='512') 00:12:16.108 10:44:15 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # local -A method_bdev_malloc_create_0 00:12:16.108 10:44:15 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # method_bdev_xnvme_create_0=() 00:12:16.108 10:44:15 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # local -A method_bdev_xnvme_create_0 00:12:16.108 10:44:15 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@35 -- # method_bdev_xnvme_create_0["name"]=null0 00:12:16.108 10:44:15 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@36 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:12:16.108 10:44:15 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:12:16.108 10:44:15 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:12:16.108 10:44:15 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:12:16.108 10:44:15 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:12:16.108 10:44:15 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:16.108 10:44:15 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:16.108 { 00:12:16.108 "subsystems": [ 00:12:16.108 { 00:12:16.108 "subsystem": "bdev", 00:12:16.108 "config": [ 00:12:16.108 { 00:12:16.108 "params": { 00:12:16.108 "block_size": 512, 00:12:16.108 "num_blocks": 2097152, 00:12:16.108 "name": "malloc0" 00:12:16.108 }, 00:12:16.108 "method": "bdev_malloc_create" 00:12:16.108 }, 00:12:16.108 { 00:12:16.108 "params": { 00:12:16.108 "io_mechanism": "libaio", 00:12:16.108 "filename": "/dev/nullb0", 00:12:16.108 "name": "null0" 00:12:16.108 }, 00:12:16.108 "method": "bdev_xnvme_create" 00:12:16.108 }, 
00:12:16.108 { 00:12:16.108 "method": "bdev_wait_for_examine" 00:12:16.108 } 00:12:16.108 ] 00:12:16.108 } 00:12:16.108 ] 00:12:16.108 } 00:12:16.108 [2024-12-16 10:44:16.018112] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:12:16.108 [2024-12-16 10:44:16.018215] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80420 ] 00:12:16.368 [2024-12-16 10:44:16.151281] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:16.368 [2024-12-16 10:44:16.181680] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:17.742  [2024-12-16T10:44:18.665Z] Copying: 302/1024 [MB] (302 MBps) [2024-12-16T10:44:19.601Z] Copying: 607/1024 [MB] (304 MBps) [2024-12-16T10:44:19.859Z] Copying: 911/1024 [MB] (303 MBps) [2024-12-16T10:44:20.425Z] Copying: 1024/1024 [MB] (average 303 MBps) 00:12:20.436 00:12:20.436 10:44:20 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:12:20.436 10:44:20 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:12:20.436 10:44:20 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:20.436 10:44:20 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:20.436 { 00:12:20.436 "subsystems": [ 00:12:20.436 { 00:12:20.436 "subsystem": "bdev", 00:12:20.436 "config": [ 00:12:20.436 { 00:12:20.436 "params": { 00:12:20.437 "block_size": 512, 00:12:20.437 "num_blocks": 2097152, 00:12:20.437 "name": "malloc0" 00:12:20.437 }, 00:12:20.437 "method": "bdev_malloc_create" 00:12:20.437 }, 00:12:20.437 { 00:12:20.437 "params": { 00:12:20.437 "io_mechanism": "libaio", 00:12:20.437 "filename": "/dev/nullb0", 00:12:20.437 "name": "null0" 00:12:20.437 }, 00:12:20.437 "method": "bdev_xnvme_create" 00:12:20.437 }, 00:12:20.437 { 00:12:20.437 "method": "bdev_wait_for_examine" 00:12:20.437 } 00:12:20.437 ] 00:12:20.437 } 00:12:20.437 ] 00:12:20.437 } 00:12:20.437 [2024-12-16 10:44:20.192622] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
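The copy just traced is reproducible standalone. The JSON below is the gen_conf output verbatim: a 1 GiB malloc bdev (2097152 blocks of 512 bytes) copied into an xnvme bdev that drives /dev/nullb0 through libaio. The heredoc plumbing is an assumption; the real script feeds the same JSON over /dev/fd/62.

    # Equivalent standalone invocation of the traced run (libaio, malloc0 -> null0).
    ./build/bin/spdk_dd --ib=malloc0 --ob=null0 --json <(cat <<'JSON'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            { "params": { "block_size": 512, "num_blocks": 2097152, "name": "malloc0" },
              "method": "bdev_malloc_create" },
            { "params": { "io_mechanism": "libaio", "filename": "/dev/nullb0", "name": "null0" },
              "method": "bdev_xnvme_create" },
            { "method": "bdev_wait_for_examine" }
          ]
        }
      ]
    }
    JSON
    )

The @47 run that follows swaps --ib and --ob to copy back in the other direction with the identical configuration.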
00:12:20.437 [2024-12-16 10:44:20.192735] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80474 ] 00:12:20.437 [2024-12-16 10:44:20.325973] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:20.437 [2024-12-16 10:44:20.355997] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:21.819  [2024-12-16T10:44:22.743Z] Copying: 242/1024 [MB] (242 MBps) [2024-12-16T10:44:23.679Z] Copying: 549/1024 [MB] (307 MBps) [2024-12-16T10:44:24.246Z] Copying: 856/1024 [MB] (306 MBps) [2024-12-16T10:44:24.505Z] Copying: 1024/1024 [MB] (average 288 MBps) 00:12:24.516 00:12:24.516 10:44:24 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:12:24.516 10:44:24 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:12:24.516 10:44:24 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:12:24.516 10:44:24 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:12:24.516 10:44:24 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:24.516 10:44:24 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:24.775 { 00:12:24.775 "subsystems": [ 00:12:24.775 { 00:12:24.775 "subsystem": "bdev", 00:12:24.775 "config": [ 00:12:24.775 { 00:12:24.775 "params": { 00:12:24.775 "block_size": 512, 00:12:24.775 "num_blocks": 2097152, 00:12:24.775 "name": "malloc0" 00:12:24.775 }, 00:12:24.775 "method": "bdev_malloc_create" 00:12:24.775 }, 00:12:24.775 { 00:12:24.775 "params": { 00:12:24.775 "io_mechanism": "io_uring", 00:12:24.775 "filename": "/dev/nullb0", 00:12:24.775 "name": "null0" 00:12:24.775 }, 00:12:24.775 "method": "bdev_xnvme_create" 00:12:24.775 }, 00:12:24.775 { 00:12:24.775 "method": "bdev_wait_for_examine" 00:12:24.775 } 00:12:24.775 ] 00:12:24.775 } 00:12:24.775 ] 00:12:24.775 } 00:12:24.775 [2024-12-16 10:44:24.537415] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
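The whole test is a 2x2 matrix driven by the xnvme_io array built at xnvme.sh@20-21: each I/O engine gets both copy directions. Condensed from the trace, with SPDK_DD standing in for the full spdk_dd binary path:

    # Test matrix condensed from the xnvme.sh trace; SPDK_DD is shorthand here.
    xnvme_io=(libaio io_uring)                           # @20-21
    for io in "${xnvme_io[@]}"; do                       # @38
        method_bdev_xnvme_create_0["io_mechanism"]=$io   # @39
        "$SPDK_DD" --ib=malloc0 --ob=null0 --json <(gen_conf)   # @42: fill null0
        "$SPDK_DD" --ib=null0 --ob=malloc0 --json <(gen_conf)   # @47: read it back
    done

The io_uring rounds that follow post roughly 314-317 MBps per direction against libaio's 288-303 MBps above.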
00:12:24.776 [2024-12-16 10:44:24.537521] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80530 ] 00:12:24.776 [2024-12-16 10:44:24.668057] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:24.776 [2024-12-16 10:44:24.706526] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:26.149  [2024-12-16T10:44:27.073Z] Copying: 313/1024 [MB] (313 MBps) [2024-12-16T10:44:28.007Z] Copying: 626/1024 [MB] (313 MBps) [2024-12-16T10:44:28.265Z] Copying: 941/1024 [MB] (314 MBps) [2024-12-16T10:44:28.524Z] Copying: 1024/1024 [MB] (average 314 MBps) 00:12:28.535 00:12:28.535 10:44:28 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:12:28.535 10:44:28 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:12:28.535 10:44:28 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:28.535 10:44:28 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:28.794 { 00:12:28.794 "subsystems": [ 00:12:28.794 { 00:12:28.794 "subsystem": "bdev", 00:12:28.794 "config": [ 00:12:28.794 { 00:12:28.794 "params": { 00:12:28.794 "block_size": 512, 00:12:28.794 "num_blocks": 2097152, 00:12:28.794 "name": "malloc0" 00:12:28.794 }, 00:12:28.794 "method": "bdev_malloc_create" 00:12:28.794 }, 00:12:28.794 { 00:12:28.794 "params": { 00:12:28.794 "io_mechanism": "io_uring", 00:12:28.794 "filename": "/dev/nullb0", 00:12:28.794 "name": "null0" 00:12:28.794 }, 00:12:28.794 "method": "bdev_xnvme_create" 00:12:28.794 }, 00:12:28.794 { 00:12:28.794 "method": "bdev_wait_for_examine" 00:12:28.794 } 00:12:28.794 ] 00:12:28.794 } 00:12:28.794 ] 00:12:28.794 } 00:12:28.794 [2024-12-16 10:44:28.574520] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
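All four copies target the same kernel null_blk instance set up by init_null_blk at the top of the test. Its lifecycle, per the dd/common.sh trace, looks like the sketch below; the guard semantics around /sys/module/null_blk are an assumption, since the trace shows only the existence test and the modprobe:

    # Device lifecycle per the dd/common.sh trace (@186-187, @191).
    [[ -e /sys/module/null_blk ]] && modprobe -r null_blk   # assumed: reset a stale instance
    modprobe null_blk gb=1      # exposes /dev/nullb0, a 1 GiB no-op block device
    # ... spdk_dd / bdevperf runs against /dev/nullb0 ...
    modprobe -r null_blk        # remove_null_blk at the end of each test

By default null_blk completes I/O without storing data, so these MBps figures measure the I/O path rather than any media, which is the point of an engine comparison.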
00:12:28.794 [2024-12-16 10:44:28.574638] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80579 ] 00:12:28.794 [2024-12-16 10:44:28.707721] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:28.794 [2024-12-16 10:44:28.739967] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:30.185  [2024-12-16T10:44:31.108Z] Copying: 317/1024 [MB] (317 MBps) [2024-12-16T10:44:32.042Z] Copying: 632/1024 [MB] (315 MBps) [2024-12-16T10:44:32.300Z] Copying: 951/1024 [MB] (318 MBps) [2024-12-16T10:44:32.562Z] Copying: 1024/1024 [MB] (average 317 MBps) 00:12:32.573 00:12:32.573 10:44:32 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@52 -- # remove_null_blk 00:12:32.573 10:44:32 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@191 -- # modprobe -r null_blk 00:12:32.573 00:12:32.573 real 0m16.607s 00:12:32.573 user 0m13.850s 00:12:32.573 sys 0m2.260s 00:12:32.573 10:44:32 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:32.573 ************************************ 00:12:32.573 END TEST xnvme_to_malloc_dd_copy 00:12:32.573 ************************************ 00:12:32.573 10:44:32 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:32.832 10:44:32 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:12:32.832 10:44:32 nvme_xnvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:12:32.832 10:44:32 nvme_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:32.832 10:44:32 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:32.832 ************************************ 00:12:32.832 START TEST xnvme_bdevperf 00:12:32.832 ************************************ 00:12:32.832 10:44:32 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1125 -- # xnvme_bdevperf 00:12:32.832 10:44:32 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@57 -- # init_null_blk gb=1 00:12:32.832 10:44:32 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@186 -- # [[ -e /sys/module/null_blk ]] 00:12:32.832 10:44:32 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@186 -- # modprobe null_blk gb=1 00:12:32.832 10:44:32 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@187 -- # return 00:12:32.832 10:44:32 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # xnvme_io=() 00:12:32.832 10:44:32 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:12:32.832 10:44:32 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@60 -- # local io 00:12:32.832 10:44:32 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@62 -- # xnvme_io+=(libaio) 00:12:32.832 10:44:32 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@63 -- # xnvme_io+=(io_uring) 00:12:32.832 10:44:32 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@65 -- # xnvme0_dev=/dev/nullb0 00:12:32.832 10:44:32 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # method_bdev_xnvme_create_0=() 00:12:32.832 10:44:32 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # local -A method_bdev_xnvme_create_0 00:12:32.832 10:44:32 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@68 -- # method_bdev_xnvme_create_0["name"]=null0 00:12:32.832 10:44:32 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@69 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:12:32.832 10:44:32 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:12:32.832 
10:44:32 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:12:32.832 10:44:32 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:12:32.832 10:44:32 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:12:32.832 10:44:32 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:32.832 10:44:32 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:32.832 { 00:12:32.832 "subsystems": [ 00:12:32.832 { 00:12:32.832 "subsystem": "bdev", 00:12:32.832 "config": [ 00:12:32.832 { 00:12:32.832 "params": { 00:12:32.832 "io_mechanism": "libaio", 00:12:32.832 "filename": "/dev/nullb0", 00:12:32.832 "name": "null0" 00:12:32.832 }, 00:12:32.832 "method": "bdev_xnvme_create" 00:12:32.832 }, 00:12:32.832 { 00:12:32.832 "method": "bdev_wait_for_examine" 00:12:32.832 } 00:12:32.832 ] 00:12:32.832 } 00:12:32.832 ] 00:12:32.832 } 00:12:32.832 [2024-12-16 10:44:32.686608] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:12:32.832 [2024-12-16 10:44:32.686698] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80655 ] 00:12:32.832 [2024-12-16 10:44:32.811962] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:33.091 [2024-12-16 10:44:32.843333] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:33.091 Running I/O for 5 seconds... 00:12:34.961 206528.00 IOPS, 806.75 MiB/s [2024-12-16T10:44:36.322Z] 206624.00 IOPS, 807.12 MiB/s [2024-12-16T10:44:37.256Z] 206442.67 IOPS, 806.42 MiB/s [2024-12-16T10:44:38.192Z] 206592.00 IOPS, 807.00 MiB/s 00:12:38.203 Latency(us) 00:12:38.203 [2024-12-16T10:44:38.192Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:38.203 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:12:38.203 null0 : 5.00 206648.05 807.22 0.00 0.00 307.48 116.58 1537.58 00:12:38.203 [2024-12-16T10:44:38.192Z] =================================================================================================================== 00:12:38.203 [2024-12-16T10:44:38.192Z] Total : 206648.05 807.22 0.00 0.00 307.48 116.58 1537.58 00:12:38.203 10:44:38 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:12:38.203 10:44:38 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:12:38.203 10:44:38 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:12:38.203 10:44:38 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:12:38.203 10:44:38 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:38.203 10:44:38 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:38.203 { 00:12:38.203 "subsystems": [ 00:12:38.203 { 00:12:38.203 "subsystem": "bdev", 00:12:38.203 "config": [ 00:12:38.203 { 00:12:38.203 "params": { 00:12:38.203 "io_mechanism": "io_uring", 00:12:38.203 "filename": "/dev/nullb0", 00:12:38.203 "name": "null0" 00:12:38.203 }, 00:12:38.203 "method": "bdev_xnvme_create" 00:12:38.203 }, 00:12:38.203 { 00:12:38.203 "method": 
"bdev_wait_for_examine" 00:12:38.203 } 00:12:38.203 ] 00:12:38.203 } 00:12:38.203 ] 00:12:38.203 } 00:12:38.203 [2024-12-16 10:44:38.153320] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:12:38.203 [2024-12-16 10:44:38.153431] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80726 ] 00:12:38.463 [2024-12-16 10:44:38.287788] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:38.463 [2024-12-16 10:44:38.335181] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:38.463 Running I/O for 5 seconds... 00:12:40.431 235008.00 IOPS, 918.00 MiB/s [2024-12-16T10:44:41.809Z] 234112.00 IOPS, 914.50 MiB/s [2024-12-16T10:44:42.743Z] 234261.33 IOPS, 915.08 MiB/s [2024-12-16T10:44:43.680Z] 234480.00 IOPS, 915.94 MiB/s [2024-12-16T10:44:43.680Z] 234329.60 IOPS, 915.35 MiB/s 00:12:43.691 Latency(us) 00:12:43.691 [2024-12-16T10:44:43.680Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:43.691 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:12:43.691 null0 : 5.00 234248.61 915.03 0.00 0.00 271.10 237.88 1594.29 00:12:43.691 [2024-12-16T10:44:43.680Z] =================================================================================================================== 00:12:43.691 [2024-12-16T10:44:43.680Z] Total : 234248.61 915.03 0.00 0.00 271.10 237.88 1594.29 00:12:43.691 10:44:43 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@82 -- # remove_null_blk 00:12:43.691 10:44:43 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@191 -- # modprobe -r null_blk 00:12:43.691 00:12:43.691 real 0m10.985s 00:12:43.691 user 0m8.664s 00:12:43.691 sys 0m2.078s 00:12:43.691 10:44:43 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:43.691 ************************************ 00:12:43.691 END TEST xnvme_bdevperf 00:12:43.691 ************************************ 00:12:43.691 10:44:43 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:43.691 ************************************ 00:12:43.691 END TEST nvme_xnvme 00:12:43.691 ************************************ 00:12:43.691 00:12:43.691 real 0m27.847s 00:12:43.691 user 0m22.628s 00:12:43.691 sys 0m4.449s 00:12:43.691 10:44:43 nvme_xnvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:43.691 10:44:43 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:43.953 10:44:43 -- spdk/autotest.sh@245 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:12:43.953 10:44:43 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:43.953 10:44:43 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:43.954 10:44:43 -- common/autotest_common.sh@10 -- # set +x 00:12:43.954 ************************************ 00:12:43.954 START TEST blockdev_xnvme 00:12:43.954 ************************************ 00:12:43.954 10:44:43 blockdev_xnvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:12:43.954 * Looking for test storage... 
00:12:43.954 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:12:43.954 10:44:43 blockdev_xnvme -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:12:43.954 10:44:43 blockdev_xnvme -- common/autotest_common.sh@1681 -- # lcov --version 00:12:43.954 10:44:43 blockdev_xnvme -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:12:43.954 10:44:43 blockdev_xnvme -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:12:43.954 10:44:43 blockdev_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:43.954 10:44:43 blockdev_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:43.954 10:44:43 blockdev_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:43.954 10:44:43 blockdev_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:12:43.954 10:44:43 blockdev_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:12:43.954 10:44:43 blockdev_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:12:43.954 10:44:43 blockdev_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:12:43.954 10:44:43 blockdev_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:12:43.954 10:44:43 blockdev_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:12:43.954 10:44:43 blockdev_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:12:43.954 10:44:43 blockdev_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:43.954 10:44:43 blockdev_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:12:43.954 10:44:43 blockdev_xnvme -- scripts/common.sh@345 -- # : 1 00:12:43.954 10:44:43 blockdev_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:43.954 10:44:43 blockdev_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:43.954 10:44:43 blockdev_xnvme -- scripts/common.sh@365 -- # decimal 1 00:12:43.954 10:44:43 blockdev_xnvme -- scripts/common.sh@353 -- # local d=1 00:12:43.954 10:44:43 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:43.954 10:44:43 blockdev_xnvme -- scripts/common.sh@355 -- # echo 1 00:12:43.954 10:44:43 blockdev_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:12:43.954 10:44:43 blockdev_xnvme -- scripts/common.sh@366 -- # decimal 2 00:12:43.954 10:44:43 blockdev_xnvme -- scripts/common.sh@353 -- # local d=2 00:12:43.954 10:44:43 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:43.954 10:44:43 blockdev_xnvme -- scripts/common.sh@355 -- # echo 2 00:12:43.954 10:44:43 blockdev_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:12:43.954 10:44:43 blockdev_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:43.954 10:44:43 blockdev_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:43.954 10:44:43 blockdev_xnvme -- scripts/common.sh@368 -- # return 0 00:12:43.954 10:44:43 blockdev_xnvme -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:43.954 10:44:43 blockdev_xnvme -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:12:43.954 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:43.954 --rc genhtml_branch_coverage=1 00:12:43.954 --rc genhtml_function_coverage=1 00:12:43.954 --rc genhtml_legend=1 00:12:43.954 --rc geninfo_all_blocks=1 00:12:43.954 --rc geninfo_unexecuted_blocks=1 00:12:43.954 00:12:43.954 ' 00:12:43.954 10:44:43 blockdev_xnvme -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:12:43.954 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:43.954 --rc genhtml_branch_coverage=1 00:12:43.954 --rc genhtml_function_coverage=1 00:12:43.954 --rc genhtml_legend=1 
00:12:43.954 --rc geninfo_all_blocks=1 00:12:43.954 --rc geninfo_unexecuted_blocks=1 00:12:43.954 00:12:43.954 ' 00:12:43.954 10:44:43 blockdev_xnvme -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:12:43.954 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:43.954 --rc genhtml_branch_coverage=1 00:12:43.954 --rc genhtml_function_coverage=1 00:12:43.954 --rc genhtml_legend=1 00:12:43.954 --rc geninfo_all_blocks=1 00:12:43.954 --rc geninfo_unexecuted_blocks=1 00:12:43.954 00:12:43.954 ' 00:12:43.954 10:44:43 blockdev_xnvme -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:12:43.954 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:43.954 --rc genhtml_branch_coverage=1 00:12:43.954 --rc genhtml_function_coverage=1 00:12:43.954 --rc genhtml_legend=1 00:12:43.954 --rc geninfo_all_blocks=1 00:12:43.954 --rc geninfo_unexecuted_blocks=1 00:12:43.954 00:12:43.954 ' 00:12:43.954 10:44:43 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:12:43.954 10:44:43 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:12:43.954 10:44:43 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:12:43.954 10:44:43 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:12:43.954 10:44:43 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:12:43.954 10:44:43 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:12:43.954 10:44:43 blockdev_xnvme -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:12:43.954 10:44:43 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:12:43.954 10:44:43 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:12:43.954 10:44:43 blockdev_xnvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:12:43.954 10:44:43 blockdev_xnvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:12:43.954 10:44:43 blockdev_xnvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:12:43.954 10:44:43 blockdev_xnvme -- bdev/blockdev.sh@673 -- # uname -s 00:12:43.954 10:44:43 blockdev_xnvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:12:43.954 10:44:43 blockdev_xnvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:12:43.954 10:44:43 blockdev_xnvme -- bdev/blockdev.sh@681 -- # test_type=xnvme 00:12:43.954 10:44:43 blockdev_xnvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:12:43.954 10:44:43 blockdev_xnvme -- bdev/blockdev.sh@683 -- # dek= 00:12:43.954 10:44:43 blockdev_xnvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:12:43.954 10:44:43 blockdev_xnvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:12:43.954 10:44:43 blockdev_xnvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:12:43.954 10:44:43 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == bdev ]] 00:12:43.954 10:44:43 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == crypto_* ]] 00:12:43.954 10:44:43 blockdev_xnvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:12:43.954 10:44:43 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=80862 00:12:43.954 10:44:43 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:12:43.954 10:44:43 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 80862 00:12:43.954 10:44:43 blockdev_xnvme -- common/autotest_common.sh@831 -- # '[' -z 80862 ']' 00:12:43.954 10:44:43 blockdev_xnvme -- common/autotest_common.sh@835 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:12:43.954 10:44:43 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:12:43.954 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:43.954 10:44:43 blockdev_xnvme -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:43.954 10:44:43 blockdev_xnvme -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:43.954 10:44:43 blockdev_xnvme -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:43.954 10:44:43 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:43.954 [2024-12-16 10:44:43.920143] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:12:43.954 [2024-12-16 10:44:43.920264] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80862 ] 00:12:44.215 [2024-12-16 10:44:44.054326] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:44.215 [2024-12-16 10:44:44.088349] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:44.787 10:44:44 blockdev_xnvme -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:44.787 10:44:44 blockdev_xnvme -- common/autotest_common.sh@864 -- # return 0 00:12:44.787 10:44:44 blockdev_xnvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:12:44.787 10:44:44 blockdev_xnvme -- bdev/blockdev.sh@728 -- # setup_xnvme_conf 00:12:44.787 10:44:44 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:12:44.787 10:44:44 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:12:44.787 10:44:44 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:12:45.358 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:45.358 Waiting for block devices as requested 00:12:45.358 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:12:45.619 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:12:45.619 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:12:45.619 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:12:50.899 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:12:50.899 10:44:50 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:12:50.899 10:44:50 blockdev_xnvme -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:12:50.899 10:44:50 blockdev_xnvme -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:12:50.899 10:44:50 blockdev_xnvme -- common/autotest_common.sh@1656 -- # local nvme bdf 00:12:50.899 10:44:50 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:12:50.899 10:44:50 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:12:50.899 10:44:50 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:12:50.899 10:44:50 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:12:50.899 10:44:50 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:12:50.899 10:44:50 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:12:50.899 10:44:50 blockdev_xnvme -- common/autotest_common.sh@1659 -- # 
is_block_zoned nvme1n1 00:12:50.899 10:44:50 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:12:50.899 10:44:50 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:12:50.899 10:44:50 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:12:50.899 10:44:50 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:12:50.899 10:44:50 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n1 00:12:50.899 10:44:50 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:12:50.899 10:44:50 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:12:50.899 10:44:50 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:12:50.899 10:44:50 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:12:50.899 10:44:50 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n2 00:12:50.899 10:44:50 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme2n2 00:12:50.899 10:44:50 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:12:50.899 10:44:50 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:12:50.899 10:44:50 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:12:50.899 10:44:50 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n3 00:12:50.899 10:44:50 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme2n3 00:12:50.899 10:44:50 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:12:50.899 10:44:50 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:12:50.899 10:44:50 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:12:50.899 10:44:50 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3c3n1 00:12:50.899 10:44:50 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme3c3n1 00:12:50.899 10:44:50 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:12:50.899 10:44:50 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:12:50.899 10:44:50 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:12:50.899 10:44:50 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n1 00:12:50.899 10:44:50 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme3n1 00:12:50.899 10:44:50 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:12:50.899 10:44:50 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:12:50.899 10:44:50 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:12:50.899 10:44:50 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:12:50.899 10:44:50 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:12:50.899 10:44:50 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:50.899 10:44:50 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:12:50.899 10:44:50 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:12:50.899 10:44:50 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:12:50.899 10:44:50 
blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:50.899 10:44:50 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:12:50.899 10:44:50 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:12:50.899 10:44:50 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:12:50.899 10:44:50 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:50.899 10:44:50 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:12:50.899 10:44:50 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n2 ]] 00:12:50.899 10:44:50 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:12:50.899 10:44:50 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:50.899 10:44:50 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:12:50.899 10:44:50 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n3 ]] 00:12:50.899 10:44:50 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:12:50.899 10:44:50 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:50.899 10:44:50 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:12:50.899 10:44:50 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:12:50.899 10:44:50 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:12:50.899 10:44:50 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:50.899 10:44:50 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:12:50.899 10:44:50 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:12:50.899 10:44:50 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.899 10:44:50 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:50.899 10:44:50 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring' 'bdev_xnvme_create /dev/nvme2n2 nvme2n2 io_uring' 'bdev_xnvme_create /dev/nvme2n3 nvme2n3 io_uring' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring' 00:12:50.899 nvme0n1 00:12:50.899 nvme1n1 00:12:50.899 nvme2n1 00:12:50.899 nvme2n2 00:12:50.899 nvme2n3 00:12:50.899 nvme3n1 00:12:50.899 10:44:50 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.899 10:44:50 blockdev_xnvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:12:50.899 10:44:50 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.899 10:44:50 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:50.899 10:44:50 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.899 10:44:50 blockdev_xnvme -- bdev/blockdev.sh@739 -- # cat 00:12:50.899 10:44:50 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:12:50.899 10:44:50 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.899 10:44:50 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:50.899 10:44:50 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.899 10:44:50 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:12:50.899 10:44:50 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.899 10:44:50 
blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:50.899 10:44:50 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.899 10:44:50 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:12:50.899 10:44:50 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.899 10:44:50 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:50.899 10:44:50 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.899 10:44:50 blockdev_xnvme -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:12:50.899 10:44:50 blockdev_xnvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:12:50.899 10:44:50 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.899 10:44:50 blockdev_xnvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:12:50.899 10:44:50 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:50.899 10:44:50 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.899 10:44:50 blockdev_xnvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:12:50.899 10:44:50 blockdev_xnvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:12:50.900 10:44:50 blockdev_xnvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "e91036ad-3291-4287-a8e8-d8d059139e6e"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "e91036ad-3291-4287-a8e8-d8d059139e6e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "783c3c11-61d1-477e-bc1f-71c6d337f22d"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "783c3c11-61d1-477e-bc1f-71c6d337f22d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "7f551306-5084-4956-bfd0-ac0c3540cbc0"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "7f551306-5084-4956-bfd0-ac0c3540cbc0",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' 
"write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "6c3e87cc-e182-46c9-9806-f1560b0d2130"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "6c3e87cc-e182-46c9-9806-f1560b0d2130",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "a437a992-f7e9-4018-9675-e7f0e28ab831"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "a437a992-f7e9-4018-9675-e7f0e28ab831",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "3d4247a6-2c35-46e7-bb72-1db3aeb6b127"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "3d4247a6-2c35-46e7-bb72-1db3aeb6b127",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:12:50.900 10:44:50 blockdev_xnvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:12:50.900 10:44:50 blockdev_xnvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=nvme0n1 00:12:50.900 10:44:50 blockdev_xnvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:12:50.900 10:44:50 blockdev_xnvme -- bdev/blockdev.sh@753 -- # killprocess 80862 
00:12:50.900 10:44:50 blockdev_xnvme -- common/autotest_common.sh@950 -- # '[' -z 80862 ']' 00:12:50.900 10:44:50 blockdev_xnvme -- common/autotest_common.sh@954 -- # kill -0 80862 00:12:50.900 10:44:50 blockdev_xnvme -- common/autotest_common.sh@955 -- # uname 00:12:50.900 10:44:50 blockdev_xnvme -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:50.900 10:44:50 blockdev_xnvme -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80862 00:12:50.900 10:44:50 blockdev_xnvme -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:50.900 10:44:50 blockdev_xnvme -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:50.900 killing process with pid 80862 00:12:50.900 10:44:50 blockdev_xnvme -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80862' 00:12:50.900 10:44:50 blockdev_xnvme -- common/autotest_common.sh@969 -- # kill 80862 00:12:50.900 10:44:50 blockdev_xnvme -- common/autotest_common.sh@974 -- # wait 80862 00:12:51.161 10:44:51 blockdev_xnvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:12:51.161 10:44:51 blockdev_xnvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:12:51.161 10:44:51 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:12:51.161 10:44:51 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:51.161 10:44:51 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:51.161 ************************************ 00:12:51.161 START TEST bdev_hello_world 00:12:51.161 ************************************ 00:12:51.161 10:44:51 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:12:51.161 [2024-12-16 10:44:51.094060] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:12:51.161 [2024-12-16 10:44:51.094171] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81204 ] 00:12:51.422 [2024-12-16 10:44:51.230169] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:51.422 [2024-12-16 10:44:51.263192] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:51.683 [2024-12-16 10:44:51.431393] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:12:51.683 [2024-12-16 10:44:51.431442] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:12:51.683 [2024-12-16 10:44:51.431460] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:12:51.683 [2024-12-16 10:44:51.433495] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:12:51.683 [2024-12-16 10:44:51.434137] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:12:51.683 [2024-12-16 10:44:51.434174] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:12:51.683 [2024-12-16 10:44:51.434687] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
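The target lifecycle traced across this test reduces to the start/wait/kill pattern below; waitforlisten and killprocess are the autotest_common.sh helpers whose expansions appear in the trace (pid 80862 there), condensed here:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt &
    spdk_tgt_pid=$!
    # waitforlisten blocks until the target accepts RPCs on /var/tmp/spdk.sock
    waitforlisten "$spdk_tgt_pid"
    # ... bdev creation and tests run here ...
    # killprocess verifies the pid is alive (kill -0), then kills and reaps it
    killprocess "$spdk_tgt_pid"
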
00:12:51.683 00:12:51.683 [2024-12-16 10:44:51.434729] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:12:51.683 00:12:51.683 real 0m0.548s 00:12:51.683 user 0m0.289s 00:12:51.683 sys 0m0.140s 00:12:51.683 10:44:51 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:51.683 10:44:51 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:12:51.683 ************************************ 00:12:51.683 END TEST bdev_hello_world 00:12:51.683 ************************************ 00:12:51.683 10:44:51 blockdev_xnvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:12:51.683 10:44:51 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:51.683 10:44:51 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:51.683 10:44:51 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:51.683 ************************************ 00:12:51.683 START TEST bdev_bounds 00:12:51.683 ************************************ 00:12:51.683 10:44:51 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:12:51.683 10:44:51 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=81235 00:12:51.683 10:44:51 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:12:51.683 Process bdevio pid: 81235 00:12:51.683 10:44:51 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 81235' 00:12:51.683 10:44:51 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 81235 00:12:51.683 10:44:51 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 81235 ']' 00:12:51.683 10:44:51 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:12:51.683 10:44:51 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:51.683 10:44:51 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:51.683 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:51.683 10:44:51 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:51.683 10:44:51 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:51.683 10:44:51 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:12:51.943 [2024-12-16 10:44:51.708552] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:12:51.943 [2024-12-16 10:44:51.708667] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81235 ] 00:12:51.943 [2024-12-16 10:44:51.845415] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:51.943 [2024-12-16 10:44:51.881496] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:12:51.943 [2024-12-16 10:44:51.882058] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:12:51.943 [2024-12-16 10:44:51.882159] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:52.884 10:44:52 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:52.884 10:44:52 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:12:52.884 10:44:52 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:12:52.884 I/O targets: 00:12:52.884 nvme0n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:12:52.884 nvme1n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:12:52.884 nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:12:52.884 nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:12:52.884 nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:12:52.884 nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:12:52.884 00:12:52.884 00:12:52.884 CUnit - A unit testing framework for C - Version 2.1-3 00:12:52.884 http://cunit.sourceforge.net/ 00:12:52.884 00:12:52.884 00:12:52.884 Suite: bdevio tests on: nvme3n1 00:12:52.884 Test: blockdev write read block ...passed 00:12:52.884 Test: blockdev write zeroes read block ...passed 00:12:52.884 Test: blockdev write zeroes read no split ...passed 00:12:52.884 Test: blockdev write zeroes read split ...passed 00:12:52.884 Test: blockdev write zeroes read split partial ...passed 00:12:52.884 Test: blockdev reset ...passed 00:12:52.884 Test: blockdev write read 8 blocks ...passed 00:12:52.884 Test: blockdev write read size > 128k ...passed 00:12:52.884 Test: blockdev write read invalid size ...passed 00:12:52.884 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:52.884 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:52.884 Test: blockdev write read max offset ...passed 00:12:52.884 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:52.884 Test: blockdev writev readv 8 blocks ...passed 00:12:52.884 Test: blockdev writev readv 30 x 1block ...passed 00:12:52.884 Test: blockdev writev readv block ...passed 00:12:52.884 Test: blockdev writev readv size > 128k ...passed 00:12:52.884 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:52.884 Test: blockdev comparev and writev ...passed 00:12:52.884 Test: blockdev nvme passthru rw ...passed 00:12:52.884 Test: blockdev nvme passthru vendor specific ...passed 00:12:52.884 Test: blockdev nvme admin passthru ...passed 00:12:52.884 Test: blockdev copy ...passed 00:12:52.884 Suite: bdevio tests on: nvme2n3 00:12:52.884 Test: blockdev write read block ...passed 00:12:52.885 Test: blockdev write zeroes read block ...passed 00:12:52.885 Test: blockdev write zeroes read no split ...passed 00:12:52.885 Test: blockdev write zeroes read split ...passed 00:12:52.885 Test: blockdev write zeroes read split partial ...passed 00:12:52.885 Test: blockdev reset ...passed 
00:12:52.885 Test: blockdev write read 8 blocks ...passed 00:12:52.885 Test: blockdev write read size > 128k ...passed 00:12:52.885 Test: blockdev write read invalid size ...passed 00:12:52.885 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:52.885 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:52.885 Test: blockdev write read max offset ...passed 00:12:52.885 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:52.885 Test: blockdev writev readv 8 blocks ...passed 00:12:52.885 Test: blockdev writev readv 30 x 1block ...passed 00:12:52.885 Test: blockdev writev readv block ...passed 00:12:52.885 Test: blockdev writev readv size > 128k ...passed 00:12:52.885 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:52.885 Test: blockdev comparev and writev ...passed 00:12:52.885 Test: blockdev nvme passthru rw ...passed 00:12:52.885 Test: blockdev nvme passthru vendor specific ...passed 00:12:52.885 Test: blockdev nvme admin passthru ...passed 00:12:52.885 Test: blockdev copy ...passed 00:12:52.885 Suite: bdevio tests on: nvme2n2 00:12:52.885 Test: blockdev write read block ...passed 00:12:52.885 Test: blockdev write zeroes read block ...passed 00:12:52.885 Test: blockdev write zeroes read no split ...passed 00:12:52.885 Test: blockdev write zeroes read split ...passed 00:12:52.885 Test: blockdev write zeroes read split partial ...passed 00:12:52.885 Test: blockdev reset ...passed 00:12:52.885 Test: blockdev write read 8 blocks ...passed 00:12:52.885 Test: blockdev write read size > 128k ...passed 00:12:52.885 Test: blockdev write read invalid size ...passed 00:12:52.885 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:52.885 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:52.885 Test: blockdev write read max offset ...passed 00:12:52.885 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:52.885 Test: blockdev writev readv 8 blocks ...passed 00:12:52.885 Test: blockdev writev readv 30 x 1block ...passed 00:12:52.885 Test: blockdev writev readv block ...passed 00:12:52.885 Test: blockdev writev readv size > 128k ...passed 00:12:52.885 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:52.885 Test: blockdev comparev and writev ...passed 00:12:52.885 Test: blockdev nvme passthru rw ...passed 00:12:52.885 Test: blockdev nvme passthru vendor specific ...passed 00:12:52.885 Test: blockdev nvme admin passthru ...passed 00:12:52.885 Test: blockdev copy ...passed 00:12:52.885 Suite: bdevio tests on: nvme2n1 00:12:52.885 Test: blockdev write read block ...passed 00:12:52.885 Test: blockdev write zeroes read block ...passed 00:12:52.885 Test: blockdev write zeroes read no split ...passed 00:12:52.885 Test: blockdev write zeroes read split ...passed 00:12:52.885 Test: blockdev write zeroes read split partial ...passed 00:12:52.885 Test: blockdev reset ...passed 00:12:52.885 Test: blockdev write read 8 blocks ...passed 00:12:52.885 Test: blockdev write read size > 128k ...passed 00:12:52.885 Test: blockdev write read invalid size ...passed 00:12:52.885 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:52.885 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:52.885 Test: blockdev write read max offset ...passed 00:12:52.885 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:52.885 Test: blockdev writev readv 8 blocks 
...passed 00:12:52.885 Test: blockdev writev readv 30 x 1block ...passed 00:12:52.885 Test: blockdev writev readv block ...passed 00:12:52.885 Test: blockdev writev readv size > 128k ...passed 00:12:52.885 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:52.885 Test: blockdev comparev and writev ...passed 00:12:52.885 Test: blockdev nvme passthru rw ...passed 00:12:52.885 Test: blockdev nvme passthru vendor specific ...passed 00:12:52.885 Test: blockdev nvme admin passthru ...passed 00:12:52.885 Test: blockdev copy ...passed 00:12:52.885 Suite: bdevio tests on: nvme1n1 00:12:52.885 Test: blockdev write read block ...passed 00:12:52.885 Test: blockdev write zeroes read block ...passed 00:12:52.885 Test: blockdev write zeroes read no split ...passed 00:12:53.147 Test: blockdev write zeroes read split ...passed 00:12:53.147 Test: blockdev write zeroes read split partial ...passed 00:12:53.147 Test: blockdev reset ...passed 00:12:53.147 Test: blockdev write read 8 blocks ...passed 00:12:53.147 Test: blockdev write read size > 128k ...passed 00:12:53.147 Test: blockdev write read invalid size ...passed 00:12:53.147 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:53.147 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:53.147 Test: blockdev write read max offset ...passed 00:12:53.147 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:53.147 Test: blockdev writev readv 8 blocks ...passed 00:12:53.147 Test: blockdev writev readv 30 x 1block ...passed 00:12:53.147 Test: blockdev writev readv block ...passed 00:12:53.147 Test: blockdev writev readv size > 128k ...passed 00:12:53.147 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:53.147 Test: blockdev comparev and writev ...passed 00:12:53.147 Test: blockdev nvme passthru rw ...passed 00:12:53.147 Test: blockdev nvme passthru vendor specific ...passed 00:12:53.147 Test: blockdev nvme admin passthru ...passed 00:12:53.147 Test: blockdev copy ...passed 00:12:53.147 Suite: bdevio tests on: nvme0n1 00:12:53.147 Test: blockdev write read block ...passed 00:12:53.147 Test: blockdev write zeroes read block ...passed 00:12:53.147 Test: blockdev write zeroes read no split ...passed 00:12:53.147 Test: blockdev write zeroes read split ...passed 00:12:53.147 Test: blockdev write zeroes read split partial ...passed 00:12:53.147 Test: blockdev reset ...passed 00:12:53.147 Test: blockdev write read 8 blocks ...passed 00:12:53.147 Test: blockdev write read size > 128k ...passed 00:12:53.147 Test: blockdev write read invalid size ...passed 00:12:53.147 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:53.147 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:53.147 Test: blockdev write read max offset ...passed 00:12:53.147 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:53.147 Test: blockdev writev readv 8 blocks ...passed 00:12:53.147 Test: blockdev writev readv 30 x 1block ...passed 00:12:53.147 Test: blockdev writev readv block ...passed 00:12:53.147 Test: blockdev writev readv size > 128k ...passed 00:12:53.147 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:53.147 Test: blockdev comparev and writev ...passed 00:12:53.147 Test: blockdev nvme passthru rw ...passed 00:12:53.147 Test: blockdev nvme passthru vendor specific ...passed 00:12:53.147 Test: blockdev nvme admin passthru ...passed 00:12:53.147 Test: blockdev copy ...passed 
00:12:53.147 00:12:53.147 Run Summary: Type Total Ran Passed Failed Inactive 00:12:53.147 suites 6 6 n/a 0 0 00:12:53.147 tests 138 138 138 0 0 00:12:53.147 asserts 780 780 780 0 n/a 00:12:53.147 00:12:53.147 Elapsed time = 0.482 seconds 00:12:53.147 0 00:12:53.147 10:44:52 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 81235 00:12:53.147 10:44:52 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 81235 ']' 00:12:53.147 10:44:52 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 81235 00:12:53.147 10:44:52 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:12:53.147 10:44:52 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:53.147 10:44:52 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81235 00:12:53.147 10:44:52 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:53.147 10:44:52 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:53.147 10:44:52 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81235' 00:12:53.147 killing process with pid 81235 00:12:53.147 10:44:52 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@969 -- # kill 81235 00:12:53.147 10:44:52 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@974 -- # wait 81235 00:12:53.408 10:44:53 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:12:53.408 00:12:53.408 real 0m1.489s 00:12:53.408 user 0m3.810s 00:12:53.408 sys 0m0.270s 00:12:53.408 10:44:53 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:53.408 10:44:53 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:12:53.408 ************************************ 00:12:53.408 END TEST bdev_bounds 00:12:53.408 ************************************ 00:12:53.408 10:44:53 blockdev_xnvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '' 00:12:53.408 10:44:53 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:12:53.408 10:44:53 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:53.408 10:44:53 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:53.408 ************************************ 00:12:53.408 START TEST bdev_nbd 00:12:53.408 ************************************ 00:12:53.408 10:44:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '' 00:12:53.408 10:44:53 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:12:53.408 10:44:53 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:12:53.408 10:44:53 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:53.408 10:44:53 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:12:53.408 10:44:53 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:12:53.408 10:44:53 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:12:53.408 10:44:53 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 
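Each NBD mapping started below is verified with the same helper; a condensed sketch of waitfornbd as its expansion appears in the trace (retry delay assumed -- the xtrace only shows the 20-iteration bounds -- and the scratch file path shortened):

    waitfornbd() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1   # assumed interval; not visible in the xtrace output
        done
        # a single direct-I/O read proves the device actually serves blocks
        dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
        local size
        size=$(stat -c %s /tmp/nbdtest)
        rm -f /tmp/nbdtest
        [ "$size" != 0 ]
    }
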
00:12:53.408 10:44:53 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:12:53.408 10:44:53 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:12:53.408 10:44:53 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:12:53.408 10:44:53 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:12:53.408 10:44:53 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:53.408 10:44:53 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:12:53.408 10:44:53 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:12:53.408 10:44:53 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:12:53.408 10:44:53 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=81280 00:12:53.408 10:44:53 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:12:53.408 10:44:53 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 81280 /var/tmp/spdk-nbd.sock 00:12:53.408 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:12:53.408 10:44:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 81280 ']' 00:12:53.408 10:44:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:12:53.408 10:44:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:53.408 10:44:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:12:53.408 10:44:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:53.408 10:44:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:12:53.408 10:44:53 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:12:53.408 [2024-12-16 10:44:53.268200] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:12:53.408 [2024-12-16 10:44:53.268407] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:53.699 [2024-12-16 10:44:53.404993] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:53.699 [2024-12-16 10:44:53.438847] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:54.287 10:44:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:54.287 10:44:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:12:54.287 10:44:54 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:12:54.287 10:44:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:54.287 10:44:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:12:54.287 10:44:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:12:54.287 10:44:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:12:54.287 10:44:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:54.287 10:44:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:12:54.287 10:44:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:12:54.287 10:44:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:12:54.287 10:44:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:12:54.287 10:44:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:12:54.287 10:44:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:54.287 10:44:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:12:54.548 10:44:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:12:54.548 10:44:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:12:54.548 10:44:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:12:54.548 10:44:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:12:54.548 10:44:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:12:54.548 10:44:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:54.548 10:44:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:54.548 10:44:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:12:54.548 10:44:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:12:54.548 10:44:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:54.548 10:44:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:54.548 10:44:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:54.548 
1+0 records in 00:12:54.548 1+0 records out 00:12:54.548 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00094678 s, 4.3 MB/s 00:12:54.548 10:44:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:54.548 10:44:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:12:54.548 10:44:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:54.549 10:44:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:54.549 10:44:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:12:54.549 10:44:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:54.549 10:44:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:54.549 10:44:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:12:54.809 10:44:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:12:54.809 10:44:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:12:54.809 10:44:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:12:54.809 10:44:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:12:54.809 10:44:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:12:54.809 10:44:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:54.809 10:44:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:54.809 10:44:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:12:54.809 10:44:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:12:54.809 10:44:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:54.809 10:44:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:54.809 10:44:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:54.809 1+0 records in 00:12:54.809 1+0 records out 00:12:54.809 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00130177 s, 3.1 MB/s 00:12:54.809 10:44:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:54.809 10:44:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:12:54.809 10:44:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:54.809 10:44:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:54.809 10:44:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:12:54.809 10:44:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:54.809 10:44:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:54.810 10:44:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:12:55.071 10:44:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:12:55.071 10:44:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:12:55.071 10:44:54 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:12:55.071 10:44:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd2 00:12:55.071 10:44:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:12:55.071 10:44:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:55.071 10:44:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:55.071 10:44:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd2 /proc/partitions 00:12:55.071 10:44:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:12:55.071 10:44:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:55.071 10:44:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:55.071 10:44:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:55.071 1+0 records in 00:12:55.071 1+0 records out 00:12:55.071 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000588792 s, 7.0 MB/s 00:12:55.071 10:44:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:55.071 10:44:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:12:55.071 10:44:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:55.071 10:44:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:55.071 10:44:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:12:55.071 10:44:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:55.071 10:44:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:55.071 10:44:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 00:12:55.342 10:44:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:12:55.342 10:44:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:12:55.342 10:44:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:12:55.342 10:44:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd3 00:12:55.342 10:44:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:12:55.342 10:44:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:55.342 10:44:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:55.342 10:44:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd3 /proc/partitions 00:12:55.342 10:44:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:12:55.342 10:44:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:55.342 10:44:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:55.342 10:44:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:55.342 1+0 records in 00:12:55.342 1+0 records out 00:12:55.342 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00119481 s, 3.4 MB/s 00:12:55.342 10:44:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:55.342 10:44:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:12:55.342 10:44:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:55.342 10:44:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:55.342 10:44:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:12:55.342 10:44:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:55.342 10:44:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:55.342 10:44:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 00:12:55.601 10:44:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:12:55.601 10:44:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:12:55.601 10:44:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:12:55.601 10:44:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd4 00:12:55.601 10:44:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:12:55.601 10:44:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:55.601 10:44:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:55.601 10:44:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd4 /proc/partitions 00:12:55.601 10:44:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:12:55.601 10:44:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:55.601 10:44:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:55.601 10:44:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:55.601 1+0 records in 00:12:55.601 1+0 records out 00:12:55.601 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000453109 s, 9.0 MB/s 00:12:55.601 10:44:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:55.601 10:44:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:12:55.601 10:44:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:55.601 10:44:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:55.601 10:44:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:12:55.601 10:44:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:55.601 10:44:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:55.601 10:44:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:12:55.859 10:44:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:12:55.859 10:44:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:12:55.859 10:44:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:12:55.859 10:44:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd5 00:12:55.859 10:44:55 
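Each pass of the (( i < 6 )) loop traced above exports the next bdev without naming a /dev/nbdX node, so the nbd_start_disk RPC picks a free node and prints it, and the script captures that into nbd_device. A sketch of the loop, with the socket path and bdev list taken from the trace:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-nbd.sock
bdevs=(nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1)
for bdev in "${bdevs[@]}"; do
    # No device argument: the RPC assigns the next free /dev/nbdX and prints it.
    nbd_device=$("$rpc" -s "$sock" nbd_start_disk "$bdev")
    echo "$bdev exported as $nbd_device"
done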
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:12:55.859 10:44:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:55.859 10:44:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:55.859 10:44:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd5 /proc/partitions 00:12:55.859 10:44:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:12:55.859 10:44:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:55.859 10:44:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:55.859 10:44:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:55.859 1+0 records in 00:12:55.859 1+0 records out 00:12:55.859 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00112996 s, 3.6 MB/s 00:12:55.859 10:44:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:55.859 10:44:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:12:55.859 10:44:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:55.859 10:44:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:55.859 10:44:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:12:55.859 10:44:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:55.859 10:44:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:55.859 10:44:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:55.859 10:44:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:12:55.859 { 00:12:55.859 "nbd_device": "/dev/nbd0", 00:12:55.859 "bdev_name": "nvme0n1" 00:12:55.859 }, 00:12:55.859 { 00:12:55.859 "nbd_device": "/dev/nbd1", 00:12:55.859 "bdev_name": "nvme1n1" 00:12:55.859 }, 00:12:55.859 { 00:12:55.859 "nbd_device": "/dev/nbd2", 00:12:55.859 "bdev_name": "nvme2n1" 00:12:55.859 }, 00:12:55.859 { 00:12:55.859 "nbd_device": "/dev/nbd3", 00:12:55.859 "bdev_name": "nvme2n2" 00:12:55.859 }, 00:12:55.859 { 00:12:55.859 "nbd_device": "/dev/nbd4", 00:12:55.859 "bdev_name": "nvme2n3" 00:12:55.859 }, 00:12:55.859 { 00:12:55.859 "nbd_device": "/dev/nbd5", 00:12:55.859 "bdev_name": "nvme3n1" 00:12:55.859 } 00:12:55.859 ]' 00:12:55.859 10:44:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:12:55.859 10:44:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:12:55.859 10:44:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:12:55.859 { 00:12:55.859 "nbd_device": "/dev/nbd0", 00:12:55.859 "bdev_name": "nvme0n1" 00:12:55.859 }, 00:12:55.859 { 00:12:55.859 "nbd_device": "/dev/nbd1", 00:12:55.859 "bdev_name": "nvme1n1" 00:12:55.859 }, 00:12:55.859 { 00:12:55.859 "nbd_device": "/dev/nbd2", 00:12:55.859 "bdev_name": "nvme2n1" 00:12:55.859 }, 00:12:55.859 { 00:12:55.859 "nbd_device": "/dev/nbd3", 00:12:55.859 "bdev_name": "nvme2n2" 00:12:55.859 }, 00:12:55.859 { 00:12:55.859 "nbd_device": "/dev/nbd4", 00:12:55.859 "bdev_name": "nvme2n3" 00:12:55.859 }, 00:12:55.859 { 00:12:55.859 "nbd_device": 
"/dev/nbd5", 00:12:55.859 "bdev_name": "nvme3n1" 00:12:55.859 } 00:12:55.859 ]' 00:12:56.118 10:44:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:12:56.118 10:44:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:56.118 10:44:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:12:56.118 10:44:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:56.118 10:44:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:12:56.118 10:44:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:56.118 10:44:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:56.118 10:44:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:56.118 10:44:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:56.118 10:44:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:56.118 10:44:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:56.118 10:44:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:56.118 10:44:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:56.118 10:44:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:56.118 10:44:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:56.118 10:44:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:56.118 10:44:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:12:56.378 10:44:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:56.378 10:44:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:56.378 10:44:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:56.378 10:44:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:56.378 10:44:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:56.378 10:44:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:56.378 10:44:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:56.378 10:44:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:56.378 10:44:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:56.378 10:44:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:12:56.639 10:44:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:12:56.640 10:44:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:12:56.640 10:44:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:12:56.640 10:44:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:56.640 10:44:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:56.640 10:44:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd2 /proc/partitions 00:12:56.640 10:44:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:56.640 10:44:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:56.640 10:44:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:56.640 10:44:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:12:56.900 10:44:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:12:56.900 10:44:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:12:56.900 10:44:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:12:56.900 10:44:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:56.900 10:44:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:56.900 10:44:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:12:56.900 10:44:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:56.900 10:44:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:56.900 10:44:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:56.900 10:44:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:12:57.161 10:44:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:12:57.161 10:44:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:12:57.161 10:44:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:12:57.161 10:44:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:57.161 10:44:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:57.161 10:44:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:12:57.161 10:44:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:57.161 10:44:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:57.161 10:44:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:57.161 10:44:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:12:57.423 10:44:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:12:57.423 10:44:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:12:57.423 10:44:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:12:57.423 10:44:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:57.423 10:44:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:57.423 10:44:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:12:57.423 10:44:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:57.423 10:44:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:57.423 10:44:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:57.423 10:44:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:57.423 10:44:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:57.423 10:44:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:12:57.423 10:44:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:12:57.423 10:44:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:57.684 10:44:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:12:57.684 10:44:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:57.684 10:44:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:12:57.684 10:44:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:12:57.684 10:44:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:12:57.684 10:44:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:12:57.684 10:44:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:12:57.684 10:44:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:12:57.684 10:44:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:12:57.684 10:44:57 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:12:57.684 10:44:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:57.684 10:44:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:12:57.684 10:44:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:12:57.684 10:44:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:57.684 10:44:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:12:57.684 10:44:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:12:57.684 10:44:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:57.684 10:44:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:12:57.684 10:44:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:57.684 10:44:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:57.684 10:44:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:57.684 10:44:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:12:57.684 10:44:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:57.684 10:44:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:57.684 10:44:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:12:57.684 /dev/nbd0 00:12:57.946 10:44:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:57.946 10:44:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:57.946 10:44:57 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:12:57.946 10:44:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:12:57.946 10:44:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:57.946 10:44:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:57.946 10:44:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:12:57.946 10:44:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:12:57.946 10:44:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:57.946 10:44:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:57.946 10:44:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:57.946 1+0 records in 00:12:57.946 1+0 records out 00:12:57.946 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000968221 s, 4.2 MB/s 00:12:57.946 10:44:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:57.946 10:44:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:12:57.946 10:44:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:57.946 10:44:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:57.946 10:44:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:12:57.946 10:44:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:57.946 10:44:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:57.946 10:44:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd1 00:12:57.946 /dev/nbd1 00:12:57.946 10:44:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:58.209 10:44:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:58.209 10:44:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:12:58.209 10:44:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:12:58.209 10:44:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:58.209 10:44:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:58.209 10:44:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:12:58.209 10:44:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:12:58.209 10:44:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:58.209 10:44:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:58.209 10:44:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:58.209 1+0 records in 00:12:58.209 1+0 records out 00:12:58.209 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000936377 s, 4.4 MB/s 00:12:58.209 10:44:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:58.209 10:44:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:12:58.209 10:44:57 
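The nbd_get_disks JSON listed earlier (and queried again after teardown) maps every /dev/nbdX back to its bdev_name; the script flattens it with jq -r '.[] | .nbd_device' and counts /dev/nbd matches on the possibly empty result to assert how many exports remain. A sketch of that round trip:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-nbd.sock
json=$("$rpc" -s "$sock" nbd_get_disks)
# Pull just the device paths out of the JSON array.
names=$(echo "$json" | jq -r '.[] | .nbd_device')
# grep -c exits non-zero on zero matches, hence the || true guard.
count=$(echo "$names" | grep -c /dev/nbd || true)
echo "active nbd exports: $count"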
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:58.209 10:44:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:58.209 10:44:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:12:58.209 10:44:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:58.209 10:44:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:58.209 10:44:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd10 00:12:58.209 /dev/nbd10 00:12:58.209 10:44:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:12:58.209 10:44:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:12:58.209 10:44:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd10 00:12:58.209 10:44:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:12:58.209 10:44:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:58.209 10:44:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:58.209 10:44:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd10 /proc/partitions 00:12:58.472 10:44:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:12:58.472 10:44:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:58.472 10:44:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:58.472 10:44:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:58.472 1+0 records in 00:12:58.472 1+0 records out 00:12:58.472 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00124897 s, 3.3 MB/s 00:12:58.472 10:44:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:58.472 10:44:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:12:58.472 10:44:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:58.472 10:44:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:58.472 10:44:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:12:58.472 10:44:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:58.472 10:44:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:58.472 10:44:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 /dev/nbd11 00:12:58.472 /dev/nbd11 00:12:58.472 10:44:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:12:58.472 10:44:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:12:58.472 10:44:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd11 00:12:58.472 10:44:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:12:58.472 10:44:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:58.472 10:44:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:58.472 10:44:58 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd11 /proc/partitions 00:12:58.472 10:44:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:12:58.472 10:44:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:58.472 10:44:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:58.472 10:44:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:58.472 1+0 records in 00:12:58.472 1+0 records out 00:12:58.472 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00155425 s, 2.6 MB/s 00:12:58.472 10:44:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:58.472 10:44:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:12:58.472 10:44:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:58.472 10:44:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:58.472 10:44:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:12:58.472 10:44:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:58.472 10:44:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:58.472 10:44:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 /dev/nbd12 00:12:58.734 /dev/nbd12 00:12:58.734 10:44:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:12:58.734 10:44:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:12:58.734 10:44:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd12 00:12:58.734 10:44:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:12:58.734 10:44:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:58.734 10:44:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:58.734 10:44:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd12 /proc/partitions 00:12:58.734 10:44:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:12:58.734 10:44:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:58.734 10:44:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:58.734 10:44:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:58.734 1+0 records in 00:12:58.734 1+0 records out 00:12:58.734 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000996151 s, 4.1 MB/s 00:12:58.734 10:44:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:58.734 10:44:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:12:58.734 10:44:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:58.734 10:44:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:58.734 10:44:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:12:58.734 10:44:58 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:58.734 10:44:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:58.734 10:44:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:12:58.996 /dev/nbd13 00:12:58.996 10:44:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:12:58.996 10:44:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:12:58.996 10:44:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd13 00:12:58.996 10:44:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:12:58.996 10:44:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:58.996 10:44:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:58.996 10:44:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd13 /proc/partitions 00:12:58.996 10:44:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:12:58.996 10:44:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:58.996 10:44:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:58.996 10:44:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:58.996 1+0 records in 00:12:58.996 1+0 records out 00:12:58.996 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00112746 s, 3.6 MB/s 00:12:58.996 10:44:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:58.996 10:44:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:12:58.996 10:44:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:58.996 10:44:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:58.996 10:44:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:12:58.996 10:44:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:58.996 10:44:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:58.996 10:44:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:58.996 10:44:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:58.996 10:44:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:59.257 10:44:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:12:59.257 { 00:12:59.257 "nbd_device": "/dev/nbd0", 00:12:59.257 "bdev_name": "nvme0n1" 00:12:59.257 }, 00:12:59.257 { 00:12:59.257 "nbd_device": "/dev/nbd1", 00:12:59.257 "bdev_name": "nvme1n1" 00:12:59.257 }, 00:12:59.257 { 00:12:59.257 "nbd_device": "/dev/nbd10", 00:12:59.257 "bdev_name": "nvme2n1" 00:12:59.257 }, 00:12:59.257 { 00:12:59.257 "nbd_device": "/dev/nbd11", 00:12:59.257 "bdev_name": "nvme2n2" 00:12:59.257 }, 00:12:59.257 { 00:12:59.257 "nbd_device": "/dev/nbd12", 00:12:59.257 "bdev_name": "nvme2n3" 00:12:59.257 }, 00:12:59.257 { 00:12:59.257 "nbd_device": "/dev/nbd13", 00:12:59.257 "bdev_name": "nvme3n1" 00:12:59.257 } 00:12:59.257 ]' 00:12:59.257 10:44:59 blockdev_xnvme.bdev_nbd 
-- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:59.257 10:44:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:12:59.257 { 00:12:59.257 "nbd_device": "/dev/nbd0", 00:12:59.257 "bdev_name": "nvme0n1" 00:12:59.257 }, 00:12:59.257 { 00:12:59.257 "nbd_device": "/dev/nbd1", 00:12:59.257 "bdev_name": "nvme1n1" 00:12:59.257 }, 00:12:59.257 { 00:12:59.257 "nbd_device": "/dev/nbd10", 00:12:59.257 "bdev_name": "nvme2n1" 00:12:59.257 }, 00:12:59.257 { 00:12:59.257 "nbd_device": "/dev/nbd11", 00:12:59.257 "bdev_name": "nvme2n2" 00:12:59.257 }, 00:12:59.257 { 00:12:59.257 "nbd_device": "/dev/nbd12", 00:12:59.257 "bdev_name": "nvme2n3" 00:12:59.257 }, 00:12:59.257 { 00:12:59.257 "nbd_device": "/dev/nbd13", 00:12:59.257 "bdev_name": "nvme3n1" 00:12:59.257 } 00:12:59.257 ]' 00:12:59.257 10:44:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:12:59.257 /dev/nbd1 00:12:59.257 /dev/nbd10 00:12:59.257 /dev/nbd11 00:12:59.257 /dev/nbd12 00:12:59.257 /dev/nbd13' 00:12:59.257 10:44:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:59.257 10:44:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:12:59.257 /dev/nbd1 00:12:59.257 /dev/nbd10 00:12:59.257 /dev/nbd11 00:12:59.257 /dev/nbd12 00:12:59.257 /dev/nbd13' 00:12:59.257 10:44:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:12:59.257 10:44:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:12:59.257 10:44:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:12:59.257 10:44:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:12:59.257 10:44:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:12:59.257 10:44:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:59.257 10:44:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:12:59.257 10:44:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:12:59.257 10:44:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:12:59.257 10:44:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:12:59.257 10:44:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:12:59.257 256+0 records in 00:12:59.257 256+0 records out 00:12:59.257 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00806017 s, 130 MB/s 00:12:59.257 10:44:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:59.257 10:44:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:12:59.519 256+0 records in 00:12:59.519 256+0 records out 00:12:59.519 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.251945 s, 4.2 MB/s 00:12:59.519 10:44:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:59.519 10:44:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:13:00.093 256+0 records in 00:13:00.093 256+0 records out 00:13:00.093 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.328534 s, 
3.2 MB/s 00:13:00.093 10:44:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:00.093 10:44:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:13:00.093 256+0 records in 00:13:00.093 256+0 records out 00:13:00.093 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.192012 s, 5.5 MB/s 00:13:00.093 10:45:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:00.093 10:45:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:13:00.355 256+0 records in 00:13:00.355 256+0 records out 00:13:00.355 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.24577 s, 4.3 MB/s 00:13:00.355 10:45:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:00.355 10:45:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:13:00.617 256+0 records in 00:13:00.617 256+0 records out 00:13:00.617 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.237309 s, 4.4 MB/s 00:13:00.617 10:45:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:00.617 10:45:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:13:00.880 256+0 records in 00:13:00.880 256+0 records out 00:13:00.880 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.224569 s, 4.7 MB/s 00:13:00.880 10:45:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:13:00.880 10:45:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:00.880 10:45:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:13:00.880 10:45:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:13:00.880 10:45:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:13:00.880 10:45:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:13:00.880 10:45:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:13:00.880 10:45:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:00.880 10:45:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:13:00.880 10:45:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:00.880 10:45:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:13:00.880 10:45:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:00.880 10:45:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:13:00.880 10:45:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:00.880 10:45:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:13:00.880 
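The write/verify pass above pushes one shared 1 MiB /dev/urandom payload through every export with O_DIRECT, then cmp's the first 1 MiB of each device byte-for-byte against the source file, so corruption on any single path fails the whole test. The same flow, condensed (paths and block counts as in the trace):

tmp=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
nbds=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13)
# One shared random payload: 256 x 4 KiB blocks = 1 MiB.
dd if=/dev/urandom of="$tmp" bs=4096 count=256
for nbd in "${nbds[@]}"; do
    dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct
done
for nbd in "${nbds[@]}"; do
    # -b reports differing bytes; -n 1M limits the compare to the payload.
    cmp -b -n 1M "$tmp" "$nbd"
done
rm "$tmp"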
10:45:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:00.880 10:45:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:13:00.880 10:45:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:00.880 10:45:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:13:00.880 10:45:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:13:00.880 10:45:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:13:00.880 10:45:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:00.880 10:45:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:00.880 10:45:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:00.880 10:45:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:13:00.880 10:45:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:00.880 10:45:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:01.142 10:45:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:01.142 10:45:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:01.142 10:45:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:01.142 10:45:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:01.142 10:45:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:01.142 10:45:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:01.142 10:45:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:01.142 10:45:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:01.142 10:45:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:01.142 10:45:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:13:01.403 10:45:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:01.404 10:45:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:01.404 10:45:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:01.404 10:45:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:01.404 10:45:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:01.404 10:45:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:01.404 10:45:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:01.404 10:45:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:01.404 10:45:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:01.404 10:45:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk 
/dev/nbd10 00:13:01.665 10:45:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:13:01.665 10:45:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:13:01.665 10:45:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:13:01.665 10:45:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:01.665 10:45:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:01.665 10:45:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:13:01.665 10:45:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:01.665 10:45:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:01.665 10:45:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:01.665 10:45:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:13:01.665 10:45:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:13:01.665 10:45:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:13:01.665 10:45:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:13:01.665 10:45:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:01.665 10:45:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:01.665 10:45:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:13:01.665 10:45:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:01.665 10:45:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:01.665 10:45:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:01.665 10:45:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:13:01.926 10:45:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:13:01.926 10:45:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:13:01.926 10:45:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:13:01.926 10:45:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:01.926 10:45:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:01.926 10:45:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:13:01.926 10:45:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:01.926 10:45:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:01.926 10:45:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:01.926 10:45:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:13:02.187 10:45:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:13:02.187 10:45:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:13:02.187 10:45:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:13:02.187 10:45:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:02.187 10:45:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:02.187 
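Teardown mirrors setup: after each nbd_stop_disk RPC, waitfornbd_exit polls /proc/partitions until the name disappears, because the kernel detaches nbd nodes asynchronously and a later test could otherwise race a half-dead device. A sketch with the same 20-attempt budget (the sleep is an illustrative addition):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-nbd.sock
for nbd in /dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13; do
    "$rpc" -s "$sock" nbd_stop_disk "$nbd"
    name=$(basename "$nbd")
    for ((i = 1; i <= 20; i++)); do
        # Done once the kernel has dropped the partition entry.
        grep -q -w "$name" /proc/partitions || break
        sleep 0.1
    done
done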
10:45:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:13:02.187 10:45:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:02.187 10:45:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:02.187 10:45:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:02.187 10:45:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:02.187 10:45:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:02.448 10:45:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:13:02.448 10:45:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:13:02.448 10:45:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:02.448 10:45:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:13:02.448 10:45:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:02.448 10:45:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:13:02.448 10:45:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:13:02.448 10:45:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:13:02.448 10:45:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:13:02.448 10:45:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:13:02.448 10:45:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:13:02.448 10:45:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:13:02.448 10:45:02 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:13:02.448 10:45:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:02.448 10:45:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:13:02.448 10:45:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:13:02.709 malloc_lvol_verify 00:13:02.709 10:45:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:13:02.970 90e695a1-574e-40a9-b44f-b2d3a1f57423 00:13:02.970 10:45:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:13:02.970 c591ba7d-aec6-4852-978b-c57e7a7fc770 00:13:02.970 10:45:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:13:03.232 /dev/nbd0 00:13:03.232 10:45:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:13:03.232 10:45:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:13:03.232 10:45:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:13:03.232 10:45:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:13:03.232 10:45:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:13:03.232 Discarding device blocks: 0/4096mke2fs 1.47.0 (5-Feb-2023) 00:13:03.232  
done 00:13:03.232 Creating filesystem with 4096 1k blocks and 1024 inodes 00:13:03.232 00:13:03.232 Allocating group tables: 0/1 done 00:13:03.232 Writing inode tables: 0/1 done 00:13:03.232 Creating journal (1024 blocks): done 00:13:03.232 Writing superblocks and filesystem accounting information: 0/1 done 00:13:03.232 00:13:03.232 10:45:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:13:03.232 10:45:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:03.232 10:45:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:03.232 10:45:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:03.232 10:45:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:13:03.232 10:45:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:03.232 10:45:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:03.493 10:45:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:03.493 10:45:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:03.493 10:45:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:03.493 10:45:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:03.493 10:45:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:03.493 10:45:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:03.493 10:45:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:03.493 10:45:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:03.493 10:45:03 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 81280 00:13:03.493 10:45:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 81280 ']' 00:13:03.493 10:45:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 81280 00:13:03.493 10:45:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:13:03.493 10:45:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:03.493 10:45:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81280 00:13:03.493 10:45:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:03.493 10:45:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:03.493 killing process with pid 81280 00:13:03.493 10:45:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81280' 00:13:03.493 10:45:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@969 -- # kill 81280 00:13:03.493 10:45:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@974 -- # wait 81280 00:13:03.755 10:45:03 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:13:03.755 00:13:03.755 real 0m10.381s 00:13:03.755 user 0m14.191s 00:13:03.755 sys 0m3.663s 00:13:03.755 ************************************ 00:13:03.755 END TEST bdev_nbd 00:13:03.755 ************************************ 00:13:03.755 10:45:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:03.755 10:45:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 
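The nbd_with_lvol_verify step that closes bdev_nbd above exercises the full stack in one shot: a 16 MiB malloc bdev with 512-byte blocks, a logical-volume store and a 4 MiB lvol on top, an nbd export of lvs/lvol, and finally mkfs.ext4 on the node, whose journal, inode-table, and superblock writes are exactly the metadata-heavy pattern a plain dd would miss. The RPC sequence, condensed from the trace:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-nbd.sock
"$rpc" -s "$sock" bdev_malloc_create -b malloc_lvol_verify 16 512
"$rpc" -s "$sock" bdev_lvol_create_lvstore malloc_lvol_verify lvs
"$rpc" -s "$sock" bdev_lvol_create lvol 4 -l lvs
"$rpc" -s "$sock" nbd_start_disk lvs/lvol /dev/nbd0
# Filesystem creation drives journal and metadata I/O through lvol -> malloc.
mkfs.ext4 /dev/nbd0
"$rpc" -s "$sock" nbd_stop_disk /dev/nbd0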
00:13:03.755 10:45:03 blockdev_xnvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:13:03.755 10:45:03 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = nvme ']' 00:13:03.755 10:45:03 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = gpt ']' 00:13:03.755 10:45:03 blockdev_xnvme -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:13:03.755 10:45:03 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:03.755 10:45:03 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:03.755 10:45:03 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:03.755 ************************************ 00:13:03.755 START TEST bdev_fio 00:13:03.755 ************************************ 00:13:03.755 10:45:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1125 -- # fio_test_suite '' 00:13:03.755 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:13:03.755 10:45:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:13:03.755 10:45:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:13:03.755 10:45:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:13:03.755 10:45:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:13:03.755 10:45:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:13:03.755 10:45:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:13:03.755 10:45:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:13:03.755 10:45:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:03.756 10:45:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=verify 00:13:03.756 10:45:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type=AIO 00:13:03.756 10:45:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:13:03.756 10:45:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:13:03.756 10:45:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:13:03.756 10:45:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z verify ']' 00:13:03.756 10:45:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:13:03.756 10:45:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:03.756 10:45:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:13:03.756 10:45:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1313 -- # '[' verify == verify ']' 00:13:03.756 10:45:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1314 -- # cat 00:13:03.756 10:45:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1323 -- # '[' AIO == AIO ']' 00:13:03.756 10:45:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1324 -- # /usr/src/fio/fio --version 00:13:03.756 10:45:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1324 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:13:03.756 10:45:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1325 -- # echo serialize_overlap=1 00:13:03.756 10:45:03 blockdev_xnvme.bdev_fio -- 
bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:03.756 10:45:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]' 00:13:03.756 10:45:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1 00:13:03.756 10:45:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:03.756 10:45:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]' 00:13:03.756 10:45:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1 00:13:03.756 10:45:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:03.756 10:45:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]' 00:13:03.756 10:45:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1 00:13:03.756 10:45:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:03.756 10:45:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n2]' 00:13:03.756 10:45:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n2 00:13:03.756 10:45:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:03.756 10:45:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n3]' 00:13:03.756 10:45:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n3 00:13:03.756 10:45:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:03.756 10:45:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]' 00:13:03.756 10:45:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1 00:13:03.756 10:45:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:13:03.756 10:45:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:03.756 10:45:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:13:03.756 10:45:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:03.756 10:45:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:13:03.756 ************************************ 00:13:03.756 START TEST bdev_fio_rw_verify 00:13:03.756 ************************************ 00:13:03.756 10:45:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1125 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:03.756 10:45:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 
--aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:03.756 10:45:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:13:03.756 10:45:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:03.756 10:45:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local sanitizers 00:13:03.756 10:45:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:03.756 10:45:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # shift 00:13:03.756 10:45:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local asan_lib= 00:13:03.756 10:45:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:13:03.756 10:45:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:03.756 10:45:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libasan 00:13:03.756 10:45:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:13:04.017 10:45:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:04.017 10:45:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:04.017 10:45:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # break 00:13:04.017 10:45:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:04.017 10:45:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:04.017 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:04.017 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:04.017 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:04.017 job_nvme2n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:04.017 job_nvme2n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:04.017 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:04.017 fio-3.35 00:13:04.017 Starting 6 threads 00:13:16.302 00:13:16.302 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=81685: Mon Dec 16 10:45:14 2024 00:13:16.302 read: IOPS=12.5k, BW=48.7MiB/s (51.1MB/s)(487MiB/10002msec) 00:13:16.302 slat (usec): min=2, max=2600, avg= 6.82, stdev=15.74 00:13:16.302 clat (usec): min=102, max=11499, avg=1625.66, stdev=862.74 00:13:16.302 lat (usec): min=106, max=11528, avg=1632.48, stdev=863.34 
00:13:16.302 clat percentiles (usec): 00:13:16.302 | 50.000th=[ 1516], 99.000th=[ 4293], 99.900th=[ 6128], 99.990th=[ 9896], 00:13:16.302 | 99.999th=[11469] 00:13:16.302 write: IOPS=12.7k, BW=49.7MiB/s (52.1MB/s)(497MiB/10002msec); 0 zone resets 00:13:16.302 slat (usec): min=12, max=4158, avg=42.58, stdev=149.69 00:13:16.302 clat (usec): min=89, max=9183, avg=1831.36, stdev=901.52 00:13:16.302 lat (usec): min=103, max=9210, avg=1873.94, stdev=913.49 00:13:16.302 clat percentiles (usec): 00:13:16.302 | 50.000th=[ 1696], 99.000th=[ 4621], 99.900th=[ 6325], 99.990th=[ 7701], 00:13:16.302 | 99.999th=[ 9110] 00:13:16.302 bw ( KiB/s): min=39850, max=77557, per=100.00%, avg=51018.63, stdev=1433.77, samples=114 00:13:16.302 iops : min= 9961, max=19389, avg=12754.05, stdev=358.46, samples=114 00:13:16.302 lat (usec) : 100=0.01%, 250=1.02%, 500=3.79%, 750=5.78%, 1000=9.31% 00:13:16.302 lat (msec) : 2=48.18%, 4=30.00%, 10=1.91%, 20=0.01% 00:13:16.302 cpu : usr=46.22%, sys=30.65%, ctx=5041, majf=0, minf=15099 00:13:16.302 IO depths : 1=11.6%, 2=24.1%, 4=50.9%, 8=13.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:16.302 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:16.302 complete : 0=0.0%, 4=89.1%, 8=10.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:16.302 issued rwts: total=124787,127170,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:16.302 latency : target=0, window=0, percentile=100.00%, depth=8 00:13:16.302 00:13:16.302 Run status group 0 (all jobs): 00:13:16.302 READ: bw=48.7MiB/s (51.1MB/s), 48.7MiB/s-48.7MiB/s (51.1MB/s-51.1MB/s), io=487MiB (511MB), run=10002-10002msec 00:13:16.302 WRITE: bw=49.7MiB/s (52.1MB/s), 49.7MiB/s-49.7MiB/s (52.1MB/s-52.1MB/s), io=497MiB (521MB), run=10002-10002msec 00:13:16.302 ----------------------------------------------------- 00:13:16.302 Suppressions used: 00:13:16.302 count bytes template 00:13:16.302 6 48 /usr/src/fio/parse.c 00:13:16.302 2306 221376 /usr/src/fio/iolog.c 00:13:16.302 1 8 libtcmalloc_minimal.so 00:13:16.302 1 904 libcrypto.so 00:13:16.302 ----------------------------------------------------- 00:13:16.302 00:13:16.302 00:13:16.302 real 0m11.121s 00:13:16.302 user 0m28.435s 00:13:16.302 sys 0m18.685s 00:13:16.302 10:45:14 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:16.302 ************************************ 00:13:16.302 END TEST bdev_fio_rw_verify 00:13:16.302 ************************************ 00:13:16.302 10:45:14 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:13:16.302 10:45:14 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:13:16.302 10:45:14 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:16.302 10:45:14 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:13:16.302 10:45:14 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:16.302 10:45:14 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=trim 00:13:16.302 10:45:14 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type= 00:13:16.302 10:45:14 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:13:16.302 10:45:14 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:13:16.302 10:45:14 blockdev_xnvme.bdev_fio -- 
common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:13:16.302 10:45:14 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z trim ']' 00:13:16.302 10:45:14 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:13:16.303 10:45:14 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:16.303 10:45:14 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:13:16.303 10:45:14 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1313 -- # '[' trim == verify ']' 00:13:16.303 10:45:14 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # '[' trim == trim ']' 00:13:16.303 10:45:14 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1329 -- # echo rw=trimwrite 00:13:16.303 10:45:14 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:13:16.303 10:45:14 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "e91036ad-3291-4287-a8e8-d8d059139e6e"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "e91036ad-3291-4287-a8e8-d8d059139e6e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "783c3c11-61d1-477e-bc1f-71c6d337f22d"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "783c3c11-61d1-477e-bc1f-71c6d337f22d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "7f551306-5084-4956-bfd0-ac0c3540cbc0"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "7f551306-5084-4956-bfd0-ac0c3540cbc0",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' 
"zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "6c3e87cc-e182-46c9-9806-f1560b0d2130"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "6c3e87cc-e182-46c9-9806-f1560b0d2130",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "a437a992-f7e9-4018-9675-e7f0e28ab831"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "a437a992-f7e9-4018-9675-e7f0e28ab831",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "3d4247a6-2c35-46e7-bb72-1db3aeb6b127"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "3d4247a6-2c35-46e7-bb72-1db3aeb6b127",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:13:16.303 10:45:14 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:13:16.303 10:45:14 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:16.303 /home/vagrant/spdk_repo/spdk 00:13:16.303 10:45:14 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:13:16.303 10:45:14 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:13:16.303 10:45:14 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 00:13:16.303 00:13:16.303 real 0m11.299s 00:13:16.303 user 
0m28.511s 00:13:16.303 sys 0m18.765s 00:13:16.303 10:45:14 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:16.303 ************************************ 00:13:16.303 END TEST bdev_fio 00:13:16.303 ************************************ 00:13:16.303 10:45:14 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:13:16.303 10:45:14 blockdev_xnvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:13:16.303 10:45:14 blockdev_xnvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:13:16.303 10:45:14 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:13:16.303 10:45:14 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:16.303 10:45:14 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:16.303 ************************************ 00:13:16.303 START TEST bdev_verify 00:13:16.303 ************************************ 00:13:16.303 10:45:15 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:13:16.303 [2024-12-16 10:45:15.085291] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:13:16.303 [2024-12-16 10:45:15.085430] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81854 ] 00:13:16.303 [2024-12-16 10:45:15.223226] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:16.303 [2024-12-16 10:45:15.286121] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:13:16.303 [2024-12-16 10:45:15.286317] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:16.303 Running I/O for 5 seconds... 
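The bdevperf run launched above is the whole of bdev_verify: SPDK's bdevperf example opens every bdev described in bdev.json and, under the verify workload, reads back each completed write and compares payloads. A minimal re-creation of the invocation using the paths from this run; the flag glosses are assumptions based on bdevperf's usage text, not taken from this log:

    BDEVPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    CONF=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
    # -q 128      queue depth per job
    # -o 4096     I/O size in bytes
    # -w verify   write, read back, and compare payloads
    # -t 5        run time in seconds
    # -C          drive every bdev from every core in the mask
    # -m 0x3      core mask: cores 0 and 1
    "$BDEVPERF" --json "$CONF" -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''

The result table that follows lists each nvme bdev twice, once with Core Mask 0x1 and once with 0x2, which is the visible effect of combining -C with -m 0x3.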
00:13:17.817 22912.00 IOPS, 89.50 MiB/s [2024-12-16T10:45:19.192Z] 23696.00 IOPS, 92.56 MiB/s [2024-12-16T10:45:19.764Z] 23786.67 IOPS, 92.92 MiB/s [2024-12-16T10:45:20.708Z] 23440.00 IOPS, 91.56 MiB/s 00:13:20.720 Latency(us) 00:13:20.720 [2024-12-16T10:45:20.709Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:20.720 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:20.720 Verification LBA range: start 0x0 length 0xa0000 00:13:20.720 nvme0n1 : 5.05 1925.18 7.52 0.00 0.00 66358.20 5444.53 70980.53 00:13:20.720 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:20.720 Verification LBA range: start 0xa0000 length 0xa0000 00:13:20.720 nvme0n1 : 5.02 1759.18 6.87 0.00 0.00 72634.14 12351.02 69367.34 00:13:20.720 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:20.720 Verification LBA range: start 0x0 length 0xbd0bd 00:13:20.720 nvme1n1 : 5.05 2342.66 9.15 0.00 0.00 54368.63 5772.21 61704.66 00:13:20.720 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:20.720 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:13:20.720 nvme1n1 : 5.05 2225.17 8.69 0.00 0.00 57205.84 6755.25 65737.65 00:13:20.720 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:20.720 Verification LBA range: start 0x0 length 0x80000 00:13:20.720 nvme2n1 : 5.06 1973.11 7.71 0.00 0.00 64436.01 8922.98 63721.16 00:13:20.720 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:20.720 Verification LBA range: start 0x80000 length 0x80000 00:13:20.720 nvme2n1 : 5.03 1831.36 7.15 0.00 0.00 69548.98 5797.42 73803.62 00:13:20.720 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:20.720 Verification LBA range: start 0x0 length 0x80000 00:13:20.720 nvme2n2 : 5.06 1921.95 7.51 0.00 0.00 66047.23 10284.11 63721.16 00:13:20.720 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:20.720 Verification LBA range: start 0x80000 length 0x80000 00:13:20.720 nvme2n2 : 5.03 1754.53 6.85 0.00 0.00 72519.22 11695.66 68964.04 00:13:20.720 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:20.720 Verification LBA range: start 0x0 length 0x80000 00:13:20.720 nvme2n3 : 5.06 1921.41 7.51 0.00 0.00 65947.16 6049.48 70980.53 00:13:20.720 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:20.720 Verification LBA range: start 0x80000 length 0x80000 00:13:20.720 nvme2n3 : 5.04 1753.54 6.85 0.00 0.00 72420.38 8771.74 67754.14 00:13:20.720 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:20.720 Verification LBA range: start 0x0 length 0x20000 00:13:20.720 nvme3n1 : 5.06 1923.08 7.51 0.00 0.00 65823.22 9023.80 68560.74 00:13:20.720 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:20.720 Verification LBA range: start 0x20000 length 0x20000 00:13:20.720 nvme3n1 : 5.05 1772.84 6.93 0.00 0.00 71507.18 1506.07 73803.62 00:13:20.720 [2024-12-16T10:45:20.709Z] =================================================================================================================== 00:13:20.720 [2024-12-16T10:45:20.709Z] Total : 23104.02 90.25 0.00 0.00 66038.84 1506.07 73803.62 00:13:20.983 00:13:20.983 real 0m5.812s 00:13:20.983 user 0m9.348s 00:13:20.983 sys 0m1.326s 00:13:20.983 10:45:20 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:20.983 10:45:20 
blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:13:20.983 ************************************ 00:13:20.983 END TEST bdev_verify 00:13:20.983 ************************************ 00:13:20.983 10:45:20 blockdev_xnvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:13:20.983 10:45:20 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:13:20.983 10:45:20 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:20.983 10:45:20 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:20.983 ************************************ 00:13:20.983 START TEST bdev_verify_big_io 00:13:20.983 ************************************ 00:13:20.983 10:45:20 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:13:20.983 [2024-12-16 10:45:20.945114] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:13:20.983 [2024-12-16 10:45:20.945230] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81946 ] 00:13:21.244 [2024-12-16 10:45:21.081984] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:21.244 [2024-12-16 10:45:21.125125] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:13:21.244 [2024-12-16 10:45:21.125216] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:21.505 Running I/O for 5 seconds... 
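bdev_verify_big_io repeats the same verify pass with a 64 KiB I/O size, so the run is bandwidth-bound rather than IOPS-bound and completes far fewer operations. A sketch of the only change relative to the previous run, under the same assumptions about paths and flags:

    BDEVPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    CONF=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
    "$BDEVPERF" --json "$CONF" -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''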
00:13:27.356 1120.00 IOPS, 70.00 MiB/s [2024-12-16T10:45:27.606Z] 2510.00 IOPS, 156.88 MiB/s [2024-12-16T10:45:27.606Z] 3009.00 IOPS, 188.06 MiB/s 00:13:27.617 Latency(us) 00:13:27.617 [2024-12-16T10:45:27.606Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:27.617 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:27.617 Verification LBA range: start 0x0 length 0xa000 00:13:27.617 nvme0n1 : 5.79 110.53 6.91 0.00 0.00 1072751.22 165352.37 1084066.26 00:13:27.617 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:27.617 Verification LBA range: start 0xa000 length 0xa000 00:13:27.617 nvme0n1 : 5.82 129.10 8.07 0.00 0.00 944758.82 139541.27 1639004.95 00:13:27.617 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:27.617 Verification LBA range: start 0x0 length 0xbd0b 00:13:27.617 nvme1n1 : 5.80 160.69 10.04 0.00 0.00 748391.26 47589.22 1542213.32 00:13:27.617 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:27.617 Verification LBA range: start 0xbd0b length 0xbd0b 00:13:27.617 nvme1n1 : 5.83 153.73 9.61 0.00 0.00 763577.39 19459.15 896935.78 00:13:27.617 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:27.617 Verification LBA range: start 0x0 length 0x8000 00:13:27.617 nvme2n1 : 5.93 83.65 5.23 0.00 0.00 1400515.70 123409.33 2968276.68 00:13:27.617 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:27.617 Verification LBA range: start 0x8000 length 0x8000 00:13:27.617 nvme2n1 : 5.95 91.47 5.72 0.00 0.00 1284169.95 29642.44 2077793.67 00:13:27.617 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:27.617 Verification LBA range: start 0x0 length 0x8000 00:13:27.617 nvme2n2 : 5.93 151.01 9.44 0.00 0.00 751034.91 35893.56 851766.35 00:13:27.617 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:27.617 Verification LBA range: start 0x8000 length 0x8000 00:13:27.617 nvme2n2 : 5.93 137.58 8.60 0.00 0.00 823841.58 14014.62 806596.92 00:13:27.617 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:27.617 Verification LBA range: start 0x0 length 0x8000 00:13:27.617 nvme2n3 : 5.94 150.89 9.43 0.00 0.00 728585.23 27424.30 767880.27 00:13:27.617 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:27.617 Verification LBA range: start 0x8000 length 0x8000 00:13:27.617 nvme2n3 : 5.93 116.62 7.29 0.00 0.00 943391.01 50412.31 1729343.80 00:13:27.617 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:27.617 Verification LBA range: start 0x0 length 0x2000 00:13:27.617 nvme3n1 : 5.98 192.62 12.04 0.00 0.00 558927.66 784.54 1484138.34 00:13:27.617 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:27.617 Verification LBA range: start 0x2000 length 0x2000 00:13:27.618 nvme3n1 : 5.94 135.08 8.44 0.00 0.00 794280.78 6049.48 2219754.73 00:13:27.618 [2024-12-16T10:45:27.607Z] =================================================================================================================== 00:13:27.618 [2024-12-16T10:45:27.607Z] Total : 1612.98 100.81 0.00 0.00 851754.36 784.54 2968276.68 00:13:27.878 00:13:27.878 real 0m6.768s 00:13:27.878 user 0m12.383s 00:13:27.878 sys 0m0.478s 00:13:27.878 10:45:27 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:27.879 
************************************ 00:13:27.879 END TEST bdev_verify_big_io 00:13:27.879 ************************************ 00:13:27.879 10:45:27 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:13:27.879 10:45:27 blockdev_xnvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:27.879 10:45:27 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:13:27.879 10:45:27 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:27.879 10:45:27 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:27.879 ************************************ 00:13:27.879 START TEST bdev_write_zeroes 00:13:27.879 ************************************ 00:13:27.879 10:45:27 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:27.879 [2024-12-16 10:45:27.795047] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:13:27.879 [2024-12-16 10:45:27.795184] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82045 ] 00:13:28.140 [2024-12-16 10:45:27.930169] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:28.140 [2024-12-16 10:45:27.979972] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:28.401 Running I/O for 1 seconds... 
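bdev_write_zeroes exercises the write_zeroes code path rather than a data-verifying workload, for one second on a single core; no -C or -m is passed, and the EAL line above confirms the default single-core mask (-c 0x1). A sketch of the invocation traced above, with paths from this run:

    BDEVPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    CONF=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
    "$BDEVPERF" --json "$CONF" -q 128 -o 4096 -w write_zeroes -t 1 ''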
00:13:29.345 78208.00 IOPS, 305.50 MiB/s 00:13:29.345 Latency(us) 00:13:29.345 [2024-12-16T10:45:29.334Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:29.345 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:29.345 nvme0n1 : 1.03 12841.24 50.16 0.00 0.00 9958.78 5419.32 23189.66 00:13:29.345 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:29.345 nvme1n1 : 1.02 13793.14 53.88 0.00 0.00 9261.76 4713.55 28432.54 00:13:29.345 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:29.345 nvme2n1 : 1.02 12649.50 49.41 0.00 0.00 10091.09 6225.92 27424.30 00:13:29.345 Job: nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:29.345 nvme2n2 : 1.02 12634.80 49.35 0.00 0.00 10044.74 5595.77 26012.75 00:13:29.345 Job: nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:29.345 nvme2n3 : 1.02 12620.42 49.30 0.00 0.00 10048.42 5595.77 26012.75 00:13:29.345 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:29.345 nvme3n1 : 1.03 12731.04 49.73 0.00 0.00 9949.95 4738.76 23592.96 00:13:29.345 [2024-12-16T10:45:29.334Z] =================================================================================================================== 00:13:29.345 [2024-12-16T10:45:29.334Z] Total : 77270.13 301.84 0.00 0.00 9883.58 4713.55 28432.54 00:13:29.607 00:13:29.607 real 0m1.760s 00:13:29.607 user 0m1.102s 00:13:29.607 sys 0m0.483s 00:13:29.607 10:45:29 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:29.607 10:45:29 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:13:29.607 ************************************ 00:13:29.607 END TEST bdev_write_zeroes 00:13:29.607 ************************************ 00:13:29.607 10:45:29 blockdev_xnvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:29.607 10:45:29 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:13:29.607 10:45:29 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:29.607 10:45:29 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:29.607 ************************************ 00:13:29.607 START TEST bdev_json_nonenclosed 00:13:29.607 ************************************ 00:13:29.607 10:45:29 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:29.869 [2024-12-16 10:45:29.628186] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:13:29.869 [2024-12-16 10:45:29.628327] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82087 ] 00:13:29.869 [2024-12-16 10:45:29.765977] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:29.869 [2024-12-16 10:45:29.817772] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:29.869 [2024-12-16 10:45:29.817893] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:13:29.869 [2024-12-16 10:45:29.817912] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:13:29.869 [2024-12-16 10:45:29.817947] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:30.130 00:13:30.130 real 0m0.363s 00:13:30.130 user 0m0.151s 00:13:30.130 sys 0m0.107s 00:13:30.130 10:45:29 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:30.130 10:45:29 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:13:30.130 ************************************ 00:13:30.130 END TEST bdev_json_nonenclosed 00:13:30.130 ************************************ 00:13:30.130 10:45:29 blockdev_xnvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:30.130 10:45:29 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:13:30.130 10:45:29 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:30.130 10:45:29 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:30.130 ************************************ 00:13:30.130 START TEST bdev_json_nonarray 00:13:30.130 ************************************ 00:13:30.130 10:45:29 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:30.130 [2024-12-16 10:45:30.070767] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:13:30.130 [2024-12-16 10:45:30.070969] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82107 ] 00:13:30.392 [2024-12-16 10:45:30.207939] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:30.392 [2024-12-16 10:45:30.261127] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:30.392 [2024-12-16 10:45:30.261263] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
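bdev_json_nonenclosed and bdev_json_nonarray are negative tests: each hands bdevperf a deliberately malformed JSON config and passes only when json_config_prepare_ctx rejects it and the app stops non-zero, which is exactly the ERROR/WARNING sequence above and below. Illustrative stand-ins for the two malformed shapes; these match the error messages but are assumptions, not the verbatim contents of nonenclosed.json and nonarray.json:

    BDEVPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    # top level not wrapped in {} -> "not enclosed in {}"
    printf '"subsystems": []\n' > /tmp/nonenclosed.json
    # "subsystems" present but not an array -> "'subsystems' should be an array"
    printf '{ "subsystems": {} }\n' > /tmp/nonarray.json
    "$BDEVPERF" --json /tmp/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' \
        || echo 'rejected, as expected'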
00:13:30.392 [2024-12-16 10:45:30.261282] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:13:30.392 [2024-12-16 10:45:30.261295] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:30.392 00:13:30.392 real 0m0.377s 00:13:30.392 user 0m0.155s 00:13:30.392 sys 0m0.116s 00:13:30.392 10:45:30 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:30.392 ************************************ 00:13:30.392 END TEST bdev_json_nonarray 00:13:30.392 ************************************ 00:13:30.392 10:45:30 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:13:30.653 10:45:30 blockdev_xnvme -- bdev/blockdev.sh@786 -- # [[ xnvme == bdev ]] 00:13:30.653 10:45:30 blockdev_xnvme -- bdev/blockdev.sh@793 -- # [[ xnvme == gpt ]] 00:13:30.653 10:45:30 blockdev_xnvme -- bdev/blockdev.sh@797 -- # [[ xnvme == crypto_sw ]] 00:13:30.653 10:45:30 blockdev_xnvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:13:30.653 10:45:30 blockdev_xnvme -- bdev/blockdev.sh@810 -- # cleanup 00:13:30.653 10:45:30 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:13:30.653 10:45:30 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:13:30.653 10:45:30 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:13:30.653 10:45:30 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:13:30.653 10:45:30 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:13:30.653 10:45:30 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:13:30.653 10:45:30 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:13:31.226 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:43.475 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:13:43.475 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:13:43.475 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:13:43.475 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:13:43.475 00:13:43.475 real 0m58.353s 00:13:43.475 user 1m18.064s 00:13:43.475 sys 0m38.007s 00:13:43.475 10:45:42 blockdev_xnvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:43.475 10:45:42 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:43.475 ************************************ 00:13:43.475 END TEST blockdev_xnvme 00:13:43.475 ************************************ 00:13:43.475 10:45:42 -- spdk/autotest.sh@247 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:13:43.475 10:45:42 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:13:43.475 10:45:42 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:43.475 10:45:42 -- common/autotest_common.sh@10 -- # set +x 00:13:43.475 ************************************ 00:13:43.475 START TEST ublk 00:13:43.475 ************************************ 00:13:43.475 10:45:42 ublk -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:13:43.475 * Looking for test storage... 
00:13:43.475 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:13:43.475 10:45:42 ublk -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:13:43.475 10:45:42 ublk -- common/autotest_common.sh@1681 -- # lcov --version 00:13:43.475 10:45:42 ublk -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:13:43.475 10:45:42 ublk -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:13:43.475 10:45:42 ublk -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:43.475 10:45:42 ublk -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:43.475 10:45:42 ublk -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:43.475 10:45:42 ublk -- scripts/common.sh@336 -- # IFS=.-: 00:13:43.475 10:45:42 ublk -- scripts/common.sh@336 -- # read -ra ver1 00:13:43.475 10:45:42 ublk -- scripts/common.sh@337 -- # IFS=.-: 00:13:43.475 10:45:42 ublk -- scripts/common.sh@337 -- # read -ra ver2 00:13:43.475 10:45:42 ublk -- scripts/common.sh@338 -- # local 'op=<' 00:13:43.475 10:45:42 ublk -- scripts/common.sh@340 -- # ver1_l=2 00:13:43.475 10:45:42 ublk -- scripts/common.sh@341 -- # ver2_l=1 00:13:43.475 10:45:42 ublk -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:43.475 10:45:42 ublk -- scripts/common.sh@344 -- # case "$op" in 00:13:43.475 10:45:42 ublk -- scripts/common.sh@345 -- # : 1 00:13:43.475 10:45:42 ublk -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:43.475 10:45:42 ublk -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:43.475 10:45:42 ublk -- scripts/common.sh@365 -- # decimal 1 00:13:43.475 10:45:42 ublk -- scripts/common.sh@353 -- # local d=1 00:13:43.475 10:45:42 ublk -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:43.475 10:45:42 ublk -- scripts/common.sh@355 -- # echo 1 00:13:43.475 10:45:42 ublk -- scripts/common.sh@365 -- # ver1[v]=1 00:13:43.475 10:45:42 ublk -- scripts/common.sh@366 -- # decimal 2 00:13:43.475 10:45:42 ublk -- scripts/common.sh@353 -- # local d=2 00:13:43.475 10:45:42 ublk -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:43.475 10:45:42 ublk -- scripts/common.sh@355 -- # echo 2 00:13:43.475 10:45:42 ublk -- scripts/common.sh@366 -- # ver2[v]=2 00:13:43.475 10:45:42 ublk -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:43.475 10:45:42 ublk -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:43.475 10:45:42 ublk -- scripts/common.sh@368 -- # return 0 00:13:43.475 10:45:42 ublk -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:43.475 10:45:42 ublk -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:13:43.475 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:43.475 --rc genhtml_branch_coverage=1 00:13:43.475 --rc genhtml_function_coverage=1 00:13:43.475 --rc genhtml_legend=1 00:13:43.475 --rc geninfo_all_blocks=1 00:13:43.475 --rc geninfo_unexecuted_blocks=1 00:13:43.475 00:13:43.475 ' 00:13:43.475 10:45:42 ublk -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:13:43.475 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:43.475 --rc genhtml_branch_coverage=1 00:13:43.475 --rc genhtml_function_coverage=1 00:13:43.475 --rc genhtml_legend=1 00:13:43.475 --rc geninfo_all_blocks=1 00:13:43.475 --rc geninfo_unexecuted_blocks=1 00:13:43.475 00:13:43.475 ' 00:13:43.475 10:45:42 ublk -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:13:43.475 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:43.475 --rc genhtml_branch_coverage=1 00:13:43.475 --rc 
genhtml_function_coverage=1 00:13:43.475 --rc genhtml_legend=1 00:13:43.475 --rc geninfo_all_blocks=1 00:13:43.475 --rc geninfo_unexecuted_blocks=1 00:13:43.475 00:13:43.475 ' 00:13:43.475 10:45:42 ublk -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:13:43.475 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:43.475 --rc genhtml_branch_coverage=1 00:13:43.475 --rc genhtml_function_coverage=1 00:13:43.475 --rc genhtml_legend=1 00:13:43.475 --rc geninfo_all_blocks=1 00:13:43.475 --rc geninfo_unexecuted_blocks=1 00:13:43.475 00:13:43.475 ' 00:13:43.475 10:45:42 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:13:43.475 10:45:42 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:13:43.475 10:45:42 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:13:43.475 10:45:42 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:13:43.475 10:45:42 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:13:43.475 10:45:42 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:13:43.475 10:45:42 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:13:43.475 10:45:42 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:13:43.475 10:45:42 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:13:43.475 10:45:42 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:13:43.475 10:45:42 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:13:43.475 10:45:42 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:13:43.475 10:45:42 ublk -- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:13:43.475 10:45:42 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:13:43.475 10:45:42 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:13:43.475 10:45:42 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:13:43.475 10:45:42 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:13:43.475 10:45:42 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:13:43.475 10:45:42 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:13:43.475 10:45:42 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:13:43.475 10:45:42 ublk -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:13:43.475 10:45:42 ublk -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:43.475 10:45:42 ublk -- common/autotest_common.sh@10 -- # set +x 00:13:43.475 ************************************ 00:13:43.475 START TEST test_save_ublk_config 00:13:43.475 ************************************ 00:13:43.475 10:45:42 ublk.test_save_ublk_config -- common/autotest_common.sh@1125 -- # test_save_config 00:13:43.475 10:45:42 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:13:43.475 10:45:42 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=82403 00:13:43.475 10:45:42 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:13:43.475 10:45:42 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 82403 00:13:43.475 10:45:42 ublk.test_save_ublk_config -- common/autotest_common.sh@831 -- # '[' -z 82403 ']' 00:13:43.475 10:45:42 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:43.475 10:45:42 ublk.test_save_ublk_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:43.475 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:43.475 10:45:42 ublk.test_save_ublk_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
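test_save_ublk_config first brings up a plain spdk_tgt with ublk debug logging (-L ublk), builds a ublk device on top of a malloc bdev, and snapshots the resulting configuration with save_config. A hedged sketch of that round trip through scripts/rpc.py; the method names and parameter values (malloc0, cpumask "1", ublk_id 0, num_queues 1, queue_depth 128) are taken from the saved config dumped below, while the rpc.py option spellings are assumptions:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$RPC" bdev_malloc_create -b malloc0 32 4096    # 32 MiB = the 8192 x 4096-byte blocks in the dump
    "$RPC" ublk_create_target -m 1                  # cpumask "1"
    "$RPC" ublk_start_disk malloc0 0 -q 1 -d 128    # expose malloc0 as /dev/ublkb0
    "$RPC" save_config > /tmp/ublk_config.json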
00:13:43.475 10:45:42 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:43.475 10:45:42 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:13:43.475 10:45:42 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:13:43.475 [2024-12-16 10:45:42.397906] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:13:43.475 [2024-12-16 10:45:42.398062] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82403 ] 00:13:43.475 [2024-12-16 10:45:42.537185] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:43.475 [2024-12-16 10:45:42.605777] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:43.475 10:45:43 ublk.test_save_ublk_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:43.475 10:45:43 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # return 0 00:13:43.475 10:45:43 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:13:43.475 10:45:43 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:13:43.475 10:45:43 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.475 10:45:43 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:13:43.475 [2024-12-16 10:45:43.245947] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:13:43.475 [2024-12-16 10:45:43.246211] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:13:43.475 malloc0 00:13:43.475 [2024-12-16 10:45:43.270043] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:13:43.475 [2024-12-16 10:45:43.270105] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:13:43.475 [2024-12-16 10:45:43.270112] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:13:43.475 [2024-12-16 10:45:43.270123] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:13:43.475 [2024-12-16 10:45:43.279019] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:43.476 [2024-12-16 10:45:43.279043] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:43.476 [2024-12-16 10:45:43.281611] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:43.476 [2024-12-16 10:45:43.281711] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:13:43.476 [2024-12-16 10:45:43.291986] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:13:43.476 0 00:13:43.476 10:45:43 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.476 10:45:43 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:13:43.476 10:45:43 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.476 10:45:43 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:13:43.737 10:45:43 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.737 10:45:43 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:13:43.737 "subsystems": [ 00:13:43.737 { 00:13:43.737 
"subsystem": "fsdev", 00:13:43.737 "config": [ 00:13:43.737 { 00:13:43.737 "method": "fsdev_set_opts", 00:13:43.737 "params": { 00:13:43.737 "fsdev_io_pool_size": 65535, 00:13:43.737 "fsdev_io_cache_size": 256 00:13:43.737 } 00:13:43.737 } 00:13:43.737 ] 00:13:43.737 }, 00:13:43.737 { 00:13:43.737 "subsystem": "keyring", 00:13:43.737 "config": [] 00:13:43.737 }, 00:13:43.737 { 00:13:43.737 "subsystem": "iobuf", 00:13:43.737 "config": [ 00:13:43.737 { 00:13:43.737 "method": "iobuf_set_options", 00:13:43.737 "params": { 00:13:43.737 "small_pool_count": 8192, 00:13:43.737 "large_pool_count": 1024, 00:13:43.737 "small_bufsize": 8192, 00:13:43.737 "large_bufsize": 135168 00:13:43.737 } 00:13:43.737 } 00:13:43.737 ] 00:13:43.737 }, 00:13:43.737 { 00:13:43.737 "subsystem": "sock", 00:13:43.737 "config": [ 00:13:43.737 { 00:13:43.737 "method": "sock_set_default_impl", 00:13:43.737 "params": { 00:13:43.737 "impl_name": "posix" 00:13:43.737 } 00:13:43.737 }, 00:13:43.737 { 00:13:43.737 "method": "sock_impl_set_options", 00:13:43.737 "params": { 00:13:43.737 "impl_name": "ssl", 00:13:43.737 "recv_buf_size": 4096, 00:13:43.737 "send_buf_size": 4096, 00:13:43.737 "enable_recv_pipe": true, 00:13:43.737 "enable_quickack": false, 00:13:43.737 "enable_placement_id": 0, 00:13:43.737 "enable_zerocopy_send_server": true, 00:13:43.737 "enable_zerocopy_send_client": false, 00:13:43.737 "zerocopy_threshold": 0, 00:13:43.737 "tls_version": 0, 00:13:43.737 "enable_ktls": false 00:13:43.737 } 00:13:43.737 }, 00:13:43.737 { 00:13:43.737 "method": "sock_impl_set_options", 00:13:43.737 "params": { 00:13:43.737 "impl_name": "posix", 00:13:43.737 "recv_buf_size": 2097152, 00:13:43.737 "send_buf_size": 2097152, 00:13:43.737 "enable_recv_pipe": true, 00:13:43.737 "enable_quickack": false, 00:13:43.737 "enable_placement_id": 0, 00:13:43.737 "enable_zerocopy_send_server": true, 00:13:43.737 "enable_zerocopy_send_client": false, 00:13:43.737 "zerocopy_threshold": 0, 00:13:43.737 "tls_version": 0, 00:13:43.737 "enable_ktls": false 00:13:43.737 } 00:13:43.737 } 00:13:43.737 ] 00:13:43.737 }, 00:13:43.737 { 00:13:43.737 "subsystem": "vmd", 00:13:43.737 "config": [] 00:13:43.737 }, 00:13:43.737 { 00:13:43.737 "subsystem": "accel", 00:13:43.737 "config": [ 00:13:43.737 { 00:13:43.737 "method": "accel_set_options", 00:13:43.737 "params": { 00:13:43.737 "small_cache_size": 128, 00:13:43.737 "large_cache_size": 16, 00:13:43.737 "task_count": 2048, 00:13:43.737 "sequence_count": 2048, 00:13:43.737 "buf_count": 2048 00:13:43.737 } 00:13:43.737 } 00:13:43.737 ] 00:13:43.737 }, 00:13:43.737 { 00:13:43.737 "subsystem": "bdev", 00:13:43.737 "config": [ 00:13:43.737 { 00:13:43.737 "method": "bdev_set_options", 00:13:43.737 "params": { 00:13:43.737 "bdev_io_pool_size": 65535, 00:13:43.737 "bdev_io_cache_size": 256, 00:13:43.737 "bdev_auto_examine": true, 00:13:43.737 "iobuf_small_cache_size": 128, 00:13:43.737 "iobuf_large_cache_size": 16 00:13:43.737 } 00:13:43.737 }, 00:13:43.737 { 00:13:43.737 "method": "bdev_raid_set_options", 00:13:43.737 "params": { 00:13:43.737 "process_window_size_kb": 1024, 00:13:43.737 "process_max_bandwidth_mb_sec": 0 00:13:43.737 } 00:13:43.737 }, 00:13:43.737 { 00:13:43.737 "method": "bdev_iscsi_set_options", 00:13:43.737 "params": { 00:13:43.737 "timeout_sec": 30 00:13:43.737 } 00:13:43.737 }, 00:13:43.737 { 00:13:43.737 "method": "bdev_nvme_set_options", 00:13:43.737 "params": { 00:13:43.737 "action_on_timeout": "none", 00:13:43.737 "timeout_us": 0, 00:13:43.737 "timeout_admin_us": 0, 00:13:43.737 
"keep_alive_timeout_ms": 10000, 00:13:43.737 "arbitration_burst": 0, 00:13:43.737 "low_priority_weight": 0, 00:13:43.737 "medium_priority_weight": 0, 00:13:43.737 "high_priority_weight": 0, 00:13:43.737 "nvme_adminq_poll_period_us": 10000, 00:13:43.737 "nvme_ioq_poll_period_us": 0, 00:13:43.737 "io_queue_requests": 0, 00:13:43.737 "delay_cmd_submit": true, 00:13:43.737 "transport_retry_count": 4, 00:13:43.737 "bdev_retry_count": 3, 00:13:43.737 "transport_ack_timeout": 0, 00:13:43.737 "ctrlr_loss_timeout_sec": 0, 00:13:43.737 "reconnect_delay_sec": 0, 00:13:43.737 "fast_io_fail_timeout_sec": 0, 00:13:43.737 "disable_auto_failback": false, 00:13:43.737 "generate_uuids": false, 00:13:43.737 "transport_tos": 0, 00:13:43.738 "nvme_error_stat": false, 00:13:43.738 "rdma_srq_size": 0, 00:13:43.738 "io_path_stat": false, 00:13:43.738 "allow_accel_sequence": false, 00:13:43.738 "rdma_max_cq_size": 0, 00:13:43.738 "rdma_cm_event_timeout_ms": 0, 00:13:43.738 "dhchap_digests": [ 00:13:43.738 "sha256", 00:13:43.738 "sha384", 00:13:43.738 "sha512" 00:13:43.738 ], 00:13:43.738 "dhchap_dhgroups": [ 00:13:43.738 "null", 00:13:43.738 "ffdhe2048", 00:13:43.738 "ffdhe3072", 00:13:43.738 "ffdhe4096", 00:13:43.738 "ffdhe6144", 00:13:43.738 "ffdhe8192" 00:13:43.738 ] 00:13:43.738 } 00:13:43.738 }, 00:13:43.738 { 00:13:43.738 "method": "bdev_nvme_set_hotplug", 00:13:43.738 "params": { 00:13:43.738 "period_us": 100000, 00:13:43.738 "enable": false 00:13:43.738 } 00:13:43.738 }, 00:13:43.738 { 00:13:43.738 "method": "bdev_malloc_create", 00:13:43.738 "params": { 00:13:43.738 "name": "malloc0", 00:13:43.738 "num_blocks": 8192, 00:13:43.738 "block_size": 4096, 00:13:43.738 "physical_block_size": 4096, 00:13:43.738 "uuid": "77332545-8382-412e-9211-be816bb83273", 00:13:43.738 "optimal_io_boundary": 0, 00:13:43.738 "md_size": 0, 00:13:43.738 "dif_type": 0, 00:13:43.738 "dif_is_head_of_md": false, 00:13:43.738 "dif_pi_format": 0 00:13:43.738 } 00:13:43.738 }, 00:13:43.738 { 00:13:43.738 "method": "bdev_wait_for_examine" 00:13:43.738 } 00:13:43.738 ] 00:13:43.738 }, 00:13:43.738 { 00:13:43.738 "subsystem": "scsi", 00:13:43.738 "config": null 00:13:43.738 }, 00:13:43.738 { 00:13:43.738 "subsystem": "scheduler", 00:13:43.738 "config": [ 00:13:43.738 { 00:13:43.738 "method": "framework_set_scheduler", 00:13:43.738 "params": { 00:13:43.738 "name": "static" 00:13:43.738 } 00:13:43.738 } 00:13:43.738 ] 00:13:43.738 }, 00:13:43.738 { 00:13:43.738 "subsystem": "vhost_scsi", 00:13:43.738 "config": [] 00:13:43.738 }, 00:13:43.738 { 00:13:43.738 "subsystem": "vhost_blk", 00:13:43.738 "config": [] 00:13:43.738 }, 00:13:43.738 { 00:13:43.738 "subsystem": "ublk", 00:13:43.738 "config": [ 00:13:43.738 { 00:13:43.738 "method": "ublk_create_target", 00:13:43.738 "params": { 00:13:43.738 "cpumask": "1" 00:13:43.738 } 00:13:43.738 }, 00:13:43.738 { 00:13:43.738 "method": "ublk_start_disk", 00:13:43.738 "params": { 00:13:43.738 "bdev_name": "malloc0", 00:13:43.738 "ublk_id": 0, 00:13:43.738 "num_queues": 1, 00:13:43.738 "queue_depth": 128 00:13:43.738 } 00:13:43.738 } 00:13:43.738 ] 00:13:43.738 }, 00:13:43.738 { 00:13:43.738 "subsystem": "nbd", 00:13:43.738 "config": [] 00:13:43.738 }, 00:13:43.738 { 00:13:43.738 "subsystem": "nvmf", 00:13:43.738 "config": [ 00:13:43.738 { 00:13:43.738 "method": "nvmf_set_config", 00:13:43.738 "params": { 00:13:43.738 "discovery_filter": "match_any", 00:13:43.738 "admin_cmd_passthru": { 00:13:43.738 "identify_ctrlr": false 00:13:43.738 }, 00:13:43.738 "dhchap_digests": [ 00:13:43.738 "sha256", 00:13:43.738 
"sha384", 00:13:43.738 "sha512" 00:13:43.738 ], 00:13:43.738 "dhchap_dhgroups": [ 00:13:43.738 "null", 00:13:43.738 "ffdhe2048", 00:13:43.738 "ffdhe3072", 00:13:43.738 "ffdhe4096", 00:13:43.738 "ffdhe6144", 00:13:43.738 "ffdhe8192" 00:13:43.738 ] 00:13:43.738 } 00:13:43.738 }, 00:13:43.738 { 00:13:43.738 "method": "nvmf_set_max_subsystems", 00:13:43.738 "params": { 00:13:43.738 "max_subsystems": 1024 00:13:43.738 } 00:13:43.738 }, 00:13:43.738 { 00:13:43.738 "method": "nvmf_set_crdt", 00:13:43.738 "params": { 00:13:43.738 "crdt1": 0, 00:13:43.738 "crdt2": 0, 00:13:43.738 "crdt3": 0 00:13:43.738 } 00:13:43.738 } 00:13:43.738 ] 00:13:43.738 }, 00:13:43.738 { 00:13:43.738 "subsystem": "iscsi", 00:13:43.738 "config": [ 00:13:43.738 { 00:13:43.738 "method": "iscsi_set_options", 00:13:43.738 "params": { 00:13:43.738 "node_base": "iqn.2016-06.io.spdk", 00:13:43.738 "max_sessions": 128, 00:13:43.738 "max_connections_per_session": 2, 00:13:43.738 "max_queue_depth": 64, 00:13:43.738 "default_time2wait": 2, 00:13:43.738 "default_time2retain": 20, 00:13:43.738 "first_burst_length": 8192, 00:13:43.738 "immediate_data": true, 00:13:43.738 "allow_duplicated_isid": false, 00:13:43.738 "error_recovery_level": 0, 00:13:43.738 "nop_timeout": 60, 00:13:43.738 "nop_in_interval": 30, 00:13:43.738 "disable_chap": false, 00:13:43.738 "require_chap": false, 00:13:43.738 "mutual_chap": false, 00:13:43.738 "chap_group": 0, 00:13:43.738 "max_large_datain_per_connection": 64, 00:13:43.738 "max_r2t_per_connection": 4, 00:13:43.738 "pdu_pool_size": 36864, 00:13:43.738 "immediate_data_pool_size": 16384, 00:13:43.738 "data_out_pool_size": 2048 00:13:43.738 } 00:13:43.738 } 00:13:43.738 ] 00:13:43.738 } 00:13:43.738 ] 00:13:43.738 }' 00:13:43.738 10:45:43 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 82403 00:13:43.738 10:45:43 ublk.test_save_ublk_config -- common/autotest_common.sh@950 -- # '[' -z 82403 ']' 00:13:43.738 10:45:43 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # kill -0 82403 00:13:43.738 10:45:43 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # uname 00:13:43.738 10:45:43 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:43.738 10:45:43 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82403 00:13:43.738 10:45:43 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:43.738 killing process with pid 82403 00:13:43.738 10:45:43 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:43.738 10:45:43 ublk.test_save_ublk_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82403' 00:13:43.738 10:45:43 ublk.test_save_ublk_config -- common/autotest_common.sh@969 -- # kill 82403 00:13:43.738 10:45:43 ublk.test_save_ublk_config -- common/autotest_common.sh@974 -- # wait 82403 00:13:43.999 [2024-12-16 10:45:43.776483] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:13:43.999 [2024-12-16 10:45:43.815004] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:43.999 [2024-12-16 10:45:43.815147] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:13:43.999 [2024-12-16 10:45:43.820955] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:43.999 [2024-12-16 10:45:43.821006] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 
00:13:43.999 [2024-12-16 10:45:43.821013] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:13:43.999 [2024-12-16 10:45:43.821034] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:13:43.999 [2024-12-16 10:45:43.821167] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:13:44.260 10:45:44 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=82435 00:13:44.260 10:45:44 ublk.test_save_ublk_config -- ublk/ublk.sh@121 -- # waitforlisten 82435 00:13:44.260 10:45:44 ublk.test_save_ublk_config -- common/autotest_common.sh@831 -- # '[' -z 82435 ']' 00:13:44.260 10:45:44 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:44.260 10:45:44 ublk.test_save_ublk_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:44.260 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:44.260 10:45:44 ublk.test_save_ublk_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:44.260 10:45:44 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:44.260 10:45:44 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:13:44.260 10:45:44 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:13:44.260 10:45:44 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:13:44.260 "subsystems": [ 00:13:44.260 { 00:13:44.260 "subsystem": "fsdev", 00:13:44.260 "config": [ 00:13:44.260 { 00:13:44.260 "method": "fsdev_set_opts", 00:13:44.260 "params": { 00:13:44.260 "fsdev_io_pool_size": 65535, 00:13:44.260 "fsdev_io_cache_size": 256 00:13:44.260 } 00:13:44.260 } 00:13:44.260 ] 00:13:44.260 }, 00:13:44.260 { 00:13:44.260 "subsystem": "keyring", 00:13:44.260 "config": [] 00:13:44.260 }, 00:13:44.260 { 00:13:44.260 "subsystem": "iobuf", 00:13:44.260 "config": [ 00:13:44.260 { 00:13:44.260 "method": "iobuf_set_options", 00:13:44.260 "params": { 00:13:44.260 "small_pool_count": 8192, 00:13:44.260 "large_pool_count": 1024, 00:13:44.260 "small_bufsize": 8192, 00:13:44.260 "large_bufsize": 135168 00:13:44.260 } 00:13:44.260 } 00:13:44.260 ] 00:13:44.260 }, 00:13:44.260 { 00:13:44.260 "subsystem": "sock", 00:13:44.260 "config": [ 00:13:44.260 { 00:13:44.260 "method": "sock_set_default_impl", 00:13:44.260 "params": { 00:13:44.260 "impl_name": "posix" 00:13:44.260 } 00:13:44.260 }, 00:13:44.260 { 00:13:44.260 "method": "sock_impl_set_options", 00:13:44.260 "params": { 00:13:44.260 "impl_name": "ssl", 00:13:44.260 "recv_buf_size": 4096, 00:13:44.260 "send_buf_size": 4096, 00:13:44.261 "enable_recv_pipe": true, 00:13:44.261 "enable_quickack": false, 00:13:44.261 "enable_placement_id": 0, 00:13:44.261 "enable_zerocopy_send_server": true, 00:13:44.261 "enable_zerocopy_send_client": false, 00:13:44.261 "zerocopy_threshold": 0, 00:13:44.261 "tls_version": 0, 00:13:44.261 "enable_ktls": false 00:13:44.261 } 00:13:44.261 }, 00:13:44.261 { 00:13:44.261 "method": "sock_impl_set_options", 00:13:44.261 "params": { 00:13:44.261 "impl_name": "posix", 00:13:44.261 "recv_buf_size": 2097152, 00:13:44.261 "send_buf_size": 2097152, 00:13:44.261 "enable_recv_pipe": true, 00:13:44.261 "enable_quickack": false, 00:13:44.261 "enable_placement_id": 0, 00:13:44.261 "enable_zerocopy_send_server": true, 00:13:44.261 "enable_zerocopy_send_client": false, 00:13:44.261 "zerocopy_threshold": 0, 00:13:44.261 "tls_version": 0, 00:13:44.261 
"enable_ktls": false 00:13:44.261 } 00:13:44.261 } 00:13:44.261 ] 00:13:44.261 }, 00:13:44.261 { 00:13:44.261 "subsystem": "vmd", 00:13:44.261 "config": [] 00:13:44.261 }, 00:13:44.261 { 00:13:44.261 "subsystem": "accel", 00:13:44.261 "config": [ 00:13:44.261 { 00:13:44.261 "method": "accel_set_options", 00:13:44.261 "params": { 00:13:44.261 "small_cache_size": 128, 00:13:44.261 "large_cache_size": 16, 00:13:44.261 "task_count": 2048, 00:13:44.261 "sequence_count": 2048, 00:13:44.261 "buf_count": 2048 00:13:44.261 } 00:13:44.261 } 00:13:44.261 ] 00:13:44.261 }, 00:13:44.261 { 00:13:44.261 "subsystem": "bdev", 00:13:44.261 "config": [ 00:13:44.261 { 00:13:44.261 "method": "bdev_set_options", 00:13:44.261 "params": { 00:13:44.261 "bdev_io_pool_size": 65535, 00:13:44.261 "bdev_io_cache_size": 256, 00:13:44.261 "bdev_auto_examine": true, 00:13:44.261 "iobuf_small_cache_size": 128, 00:13:44.261 "iobuf_large_cache_size": 16 00:13:44.261 } 00:13:44.261 }, 00:13:44.261 { 00:13:44.261 "method": "bdev_raid_set_options", 00:13:44.261 "params": { 00:13:44.261 "process_window_size_kb": 1024, 00:13:44.261 "process_max_bandwidth_mb_sec": 0 00:13:44.261 } 00:13:44.261 }, 00:13:44.261 { 00:13:44.261 "method": "bdev_iscsi_set_options", 00:13:44.261 "params": { 00:13:44.261 "timeout_sec": 30 00:13:44.261 } 00:13:44.261 }, 00:13:44.261 { 00:13:44.261 "method": "bdev_nvme_set_options", 00:13:44.261 "params": { 00:13:44.261 "action_on_timeout": "none", 00:13:44.261 "timeout_us": 0, 00:13:44.261 "timeout_admin_us": 0, 00:13:44.261 "keep_alive_timeout_ms": 10000, 00:13:44.261 "arbitration_burst": 0, 00:13:44.261 "low_priority_weight": 0, 00:13:44.261 "medium_priority_weight": 0, 00:13:44.261 "high_priority_weight": 0, 00:13:44.261 "nvme_adminq_poll_period_us": 10000, 00:13:44.261 "nvme_ioq_poll_period_us": 0, 00:13:44.261 "io_queue_requests": 0, 00:13:44.261 "delay_cmd_submit": true, 00:13:44.261 "transport_retry_count": 4, 00:13:44.261 "bdev_retry_count": 3, 00:13:44.261 "transport_ack_timeout": 0, 00:13:44.261 "ctrlr_loss_timeout_sec": 0, 00:13:44.261 "reconnect_delay_sec": 0, 00:13:44.261 "fast_io_fail_timeout_sec": 0, 00:13:44.261 "disable_auto_failback": false, 00:13:44.261 "generate_uuids": false, 00:13:44.261 "transport_tos": 0, 00:13:44.261 "nvme_error_stat": false, 00:13:44.261 "rdma_srq_size": 0, 00:13:44.261 "io_path_stat": false, 00:13:44.261 "allow_accel_sequence": false, 00:13:44.261 "rdma_max_cq_size": 0, 00:13:44.261 "rdma_cm_event_timeout_ms": 0, 00:13:44.261 "dhchap_digests": [ 00:13:44.261 "sha256", 00:13:44.261 "sha384", 00:13:44.261 "sha512" 00:13:44.261 ], 00:13:44.261 "dhchap_dhgroups": [ 00:13:44.261 "null", 00:13:44.261 "ffdhe2048", 00:13:44.261 "ffdhe3072", 00:13:44.261 "ffdhe4096", 00:13:44.261 "ffdhe6144", 00:13:44.261 "ffdhe8192" 00:13:44.261 ] 00:13:44.261 } 00:13:44.261 }, 00:13:44.261 { 00:13:44.261 "method": "bdev_nvme_set_hotplug", 00:13:44.261 "params": { 00:13:44.261 "period_us": 100000, 00:13:44.261 "enable": false 00:13:44.261 } 00:13:44.261 }, 00:13:44.261 { 00:13:44.261 "method": "bdev_malloc_create", 00:13:44.261 "params": { 00:13:44.261 "name": "malloc0", 00:13:44.261 "num_blocks": 8192, 00:13:44.261 "block_size": 4096, 00:13:44.261 "physical_block_size": 4096, 00:13:44.261 "uuid": "77332545-8382-412e-9211-be816bb83273", 00:13:44.261 "optimal_io_boundary": 0, 00:13:44.261 "md_size": 0, 00:13:44.261 "dif_type": 0, 00:13:44.261 "dif_is_head_of_md": false, 00:13:44.261 "dif_pi_format": 0 00:13:44.261 } 00:13:44.261 }, 00:13:44.261 { 00:13:44.261 "method": 
"bdev_wait_for_examine" 00:13:44.261 } 00:13:44.261 ] 00:13:44.261 }, 00:13:44.261 { 00:13:44.261 "subsystem": "scsi", 00:13:44.261 "config": null 00:13:44.261 }, 00:13:44.261 { 00:13:44.261 "subsystem": "scheduler", 00:13:44.261 "config": [ 00:13:44.261 { 00:13:44.261 "method": "framework_set_scheduler", 00:13:44.261 "params": { 00:13:44.261 "name": "static" 00:13:44.261 } 00:13:44.261 } 00:13:44.261 ] 00:13:44.261 }, 00:13:44.261 { 00:13:44.261 "subsystem": "vhost_scsi", 00:13:44.261 "config": [] 00:13:44.261 }, 00:13:44.261 { 00:13:44.261 "subsystem": "vhost_blk", 00:13:44.261 "config": [] 00:13:44.261 }, 00:13:44.261 { 00:13:44.261 "subsystem": "ublk", 00:13:44.261 "config": [ 00:13:44.261 { 00:13:44.261 "method": "ublk_create_target", 00:13:44.261 "params": { 00:13:44.261 "cpumask": "1" 00:13:44.261 } 00:13:44.261 }, 00:13:44.261 { 00:13:44.261 "method": "ublk_start_disk", 00:13:44.261 "params": { 00:13:44.261 "bdev_name": "malloc0", 00:13:44.261 "ublk_id": 0, 00:13:44.261 "num_queues": 1, 00:13:44.261 "queue_depth": 128 00:13:44.261 } 00:13:44.261 } 00:13:44.261 ] 00:13:44.261 }, 00:13:44.261 { 00:13:44.261 "subsystem": "nbd", 00:13:44.261 "config": [] 00:13:44.261 }, 00:13:44.261 { 00:13:44.261 "subsystem": "nvmf", 00:13:44.261 "config": [ 00:13:44.261 { 00:13:44.261 "method": "nvmf_set_config", 00:13:44.261 "params": { 00:13:44.261 "discovery_filter": "match_any", 00:13:44.261 "admin_cmd_passthru": { 00:13:44.261 "identify_ctrlr": false 00:13:44.261 }, 00:13:44.261 "dhchap_digests": [ 00:13:44.261 "sha256", 00:13:44.261 "sha384", 00:13:44.261 "sha512" 00:13:44.261 ], 00:13:44.261 "dhchap_dhgroups": [ 00:13:44.261 "null", 00:13:44.261 "ffdhe2048", 00:13:44.261 "ffdhe3072", 00:13:44.261 "ffdhe4096", 00:13:44.261 "ffdhe6144", 00:13:44.261 "ffdhe8192" 00:13:44.261 ] 00:13:44.261 } 00:13:44.261 }, 00:13:44.261 { 00:13:44.261 "method": "nvmf_set_max_subsystems", 00:13:44.261 "params": { 00:13:44.261 "max_subsystems": 1024 00:13:44.261 } 00:13:44.261 }, 00:13:44.261 { 00:13:44.261 "method": "nvmf_set_crdt", 00:13:44.261 "params": { 00:13:44.261 "crdt1": 0, 00:13:44.261 "crdt2": 0, 00:13:44.261 "crdt3": 0 00:13:44.261 } 00:13:44.261 } 00:13:44.261 ] 00:13:44.261 }, 00:13:44.261 { 00:13:44.261 "subsystem": "iscsi", 00:13:44.261 "config": [ 00:13:44.261 { 00:13:44.261 "method": "iscsi_set_options", 00:13:44.261 "params": { 00:13:44.261 "node_base": "iqn.2016-06.io.spdk", 00:13:44.261 "max_sessions": 128, 00:13:44.261 "max_connections_per_session": 2, 00:13:44.261 "max_queue_depth": 64, 00:13:44.261 "default_time2wait": 2, 00:13:44.261 "default_time2retain": 20, 00:13:44.261 "first_burst_length": 8192, 00:13:44.261 "immediate_data": true, 00:13:44.261 "allow_duplicated_isid": false, 00:13:44.261 "error_recovery_level": 0, 00:13:44.261 "nop_timeout": 60, 00:13:44.261 "nop_in_interval": 30, 00:13:44.261 "disable_chap": false, 00:13:44.261 "require_chap": false, 00:13:44.261 "mutual_chap": false, 00:13:44.261 "chap_group": 0, 00:13:44.261 "max_large_datain_per_connection": 64, 00:13:44.261 "max_r2t_per_connection": 4, 00:13:44.261 "pdu_pool_size": 36864, 00:13:44.261 "immediate_data_pool_size": 16384, 00:13:44.261 "data_out_pool_size": 2048 00:13:44.261 } 00:13:44.261 } 00:13:44.261 ] 00:13:44.261 } 00:13:44.261 ] 00:13:44.261 }' 00:13:44.261 [2024-12-16 10:45:44.236383] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:13:44.261 [2024-12-16 10:45:44.236500] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82435 ] 00:13:44.522 [2024-12-16 10:45:44.363821] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:44.522 [2024-12-16 10:45:44.405945] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:44.783 [2024-12-16 10:45:44.715944] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:13:44.783 [2024-12-16 10:45:44.716207] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:13:44.783 [2024-12-16 10:45:44.724046] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:13:44.783 [2024-12-16 10:45:44.724109] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:13:44.783 [2024-12-16 10:45:44.724116] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:13:44.783 [2024-12-16 10:45:44.724122] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:13:44.783 [2024-12-16 10:45:44.733017] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:44.783 [2024-12-16 10:45:44.733035] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:44.783 [2024-12-16 10:45:44.739958] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:44.783 [2024-12-16 10:45:44.740040] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:13:44.783 [2024-12-16 10:45:44.756945] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:13:45.354 10:45:45 ublk.test_save_ublk_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:45.354 10:45:45 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # return 0 00:13:45.354 10:45:45 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:13:45.354 10:45:45 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.354 10:45:45 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:13:45.354 10:45:45 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:13:45.354 10:45:45 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.354 10:45:45 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:13:45.354 10:45:45 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:13:45.354 10:45:45 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 82435 00:13:45.354 10:45:45 ublk.test_save_ublk_config -- common/autotest_common.sh@950 -- # '[' -z 82435 ']' 00:13:45.354 10:45:45 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # kill -0 82435 00:13:45.354 10:45:45 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # uname 00:13:45.354 10:45:45 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:45.354 10:45:45 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82435 00:13:45.354 10:45:45 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:45.354 killing process with pid 82435 00:13:45.354 
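The checks traced just above (ublk_get_disks piped through jq, then a -b test on the node) are how the script proves that replaying the saved config actually recreated the ublk device before it tears pid 82435 down here. Condensed into standalone form (a hedged illustration; the device name is taken from the trace):

  dev=$(./scripts/rpc.py ublk_get_disks | jq -r '.[0].ublk_device')
  [[ $dev == /dev/ublkb0 && -b $dev ]] && echo "ublk device restored from saved config"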
10:45:45 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:45.354 10:45:45 ublk.test_save_ublk_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82435' 00:13:45.354 10:45:45 ublk.test_save_ublk_config -- common/autotest_common.sh@969 -- # kill 82435 00:13:45.354 10:45:45 ublk.test_save_ublk_config -- common/autotest_common.sh@974 -- # wait 82435 00:13:45.354 [2024-12-16 10:45:45.316455] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:13:45.615 [2024-12-16 10:45:45.355026] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:45.615 [2024-12-16 10:45:45.355149] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:13:45.615 [2024-12-16 10:45:45.362975] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:45.615 [2024-12-16 10:45:45.363024] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:13:45.615 [2024-12-16 10:45:45.363032] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:13:45.615 [2024-12-16 10:45:45.363060] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:13:45.615 [2024-12-16 10:45:45.363194] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:13:45.876 10:45:45 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:13:45.876 00:13:45.876 real 0m3.395s 00:13:45.876 user 0m2.395s 00:13:45.876 sys 0m1.636s 00:13:45.876 10:45:45 ublk.test_save_ublk_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:45.876 ************************************ 00:13:45.876 END TEST test_save_ublk_config 00:13:45.876 ************************************ 00:13:45.876 10:45:45 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:13:45.876 10:45:45 ublk -- ublk/ublk.sh@139 -- # spdk_pid=82486 00:13:45.876 10:45:45 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:45.876 10:45:45 ublk -- ublk/ublk.sh@141 -- # waitforlisten 82486 00:13:45.876 10:45:45 ublk -- common/autotest_common.sh@831 -- # '[' -z 82486 ']' 00:13:45.876 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:45.876 10:45:45 ublk -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:45.876 10:45:45 ublk -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:45.876 10:45:45 ublk -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:45.876 10:45:45 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:13:45.876 10:45:45 ublk -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:45.876 10:45:45 ublk -- common/autotest_common.sh@10 -- # set +x 00:13:45.876 [2024-12-16 10:45:45.827455] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:13:45.876 [2024-12-16 10:45:45.827575] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82486 ] 00:13:46.137 [2024-12-16 10:45:45.963681] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:46.137 [2024-12-16 10:45:46.001981] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:46.137 [2024-12-16 10:45:46.001997] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:13:47.082 10:45:46 ublk -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:47.082 10:45:46 ublk -- common/autotest_common.sh@864 -- # return 0 00:13:47.082 10:45:46 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:13:47.082 10:45:46 ublk -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:13:47.082 10:45:46 ublk -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:47.082 10:45:46 ublk -- common/autotest_common.sh@10 -- # set +x 00:13:47.082 ************************************ 00:13:47.082 START TEST test_create_ublk 00:13:47.082 ************************************ 00:13:47.082 10:45:46 ublk.test_create_ublk -- common/autotest_common.sh@1125 -- # test_create_ublk 00:13:47.082 10:45:46 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:13:47.082 10:45:46 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.082 10:45:46 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:47.082 [2024-12-16 10:45:46.748948] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:13:47.082 [2024-12-16 10:45:46.750072] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:13:47.082 10:45:46 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.082 10:45:46 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:13:47.082 10:45:46 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:13:47.082 10:45:46 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.082 10:45:46 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:47.082 10:45:46 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.082 10:45:46 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:13:47.082 10:45:46 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:13:47.082 10:45:46 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.082 10:45:46 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:47.082 [2024-12-16 10:45:46.813065] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:13:47.082 [2024-12-16 10:45:46.813465] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:13:47.082 [2024-12-16 10:45:46.813479] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:13:47.082 [2024-12-16 10:45:46.813493] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:13:47.082 [2024-12-16 10:45:46.824943] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:47.082 [2024-12-16 10:45:46.824966] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:47.082 
[2024-12-16 10:45:46.835949] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:47.082 [2024-12-16 10:45:46.836557] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:13:47.082 [2024-12-16 10:45:46.854958] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:13:47.082 10:45:46 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.082 10:45:46 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:13:47.082 10:45:46 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:13:47.082 10:45:46 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:13:47.082 10:45:46 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.082 10:45:46 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:47.082 10:45:46 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.082 10:45:46 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:13:47.082 { 00:13:47.082 "ublk_device": "/dev/ublkb0", 00:13:47.082 "id": 0, 00:13:47.082 "queue_depth": 512, 00:13:47.082 "num_queues": 4, 00:13:47.082 "bdev_name": "Malloc0" 00:13:47.082 } 00:13:47.082 ]' 00:13:47.082 10:45:46 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:13:47.082 10:45:46 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:13:47.082 10:45:46 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:13:47.082 10:45:46 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:13:47.082 10:45:46 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:13:47.082 10:45:46 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:13:47.082 10:45:46 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:13:47.082 10:45:46 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:13:47.082 10:45:46 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:13:47.082 10:45:47 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:13:47.082 10:45:47 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:13:47.083 10:45:47 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:13:47.083 10:45:47 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:13:47.083 10:45:47 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:13:47.083 10:45:47 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:13:47.083 10:45:47 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:13:47.083 10:45:47 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:13:47.083 10:45:47 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:13:47.083 10:45:47 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:13:47.083 10:45:47 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:13:47.083 10:45:47 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 
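The fio_template assembled above is executed next: a 10-second, time-based direct write of pattern 0xcc across the first 128 MiB of /dev/ublkb0. As fio itself notes below, the read-back verify phase never runs because writes consume the whole runtime; to actually exercise verification against the same pattern, a separate read pass could follow (hypothetical, not part of this run):

  fio --name=verify_test --filename=/dev/ublkb0 --offset=0 --size=134217728 \
      --rw=read --direct=1 --verify=pattern --verify_pattern=0xcc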
00:13:47.083 10:45:47 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:13:47.344 fio: verification read phase will never start because write phase uses all of runtime 00:13:47.344 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:13:47.344 fio-3.35 00:13:47.344 Starting 1 process 00:13:57.326 00:13:57.326 fio_test: (groupid=0, jobs=1): err= 0: pid=82530: Mon Dec 16 10:45:57 2024 00:13:57.326 write: IOPS=17.1k, BW=66.6MiB/s (69.9MB/s)(666MiB/10001msec); 0 zone resets 00:13:57.326 clat (usec): min=34, max=4244, avg=57.77, stdev=121.49 00:13:57.326 lat (usec): min=34, max=4244, avg=58.26, stdev=121.51 00:13:57.326 clat percentiles (usec): 00:13:57.326 | 1.00th=[ 40], 5.00th=[ 43], 10.00th=[ 45], 20.00th=[ 47], 00:13:57.326 | 30.00th=[ 48], 40.00th=[ 50], 50.00th=[ 51], 60.00th=[ 53], 00:13:57.326 | 70.00th=[ 55], 80.00th=[ 58], 90.00th=[ 62], 95.00th=[ 68], 00:13:57.326 | 99.00th=[ 80], 99.50th=[ 176], 99.90th=[ 2638], 99.95th=[ 3392], 00:13:57.326 | 99.99th=[ 3982] 00:13:57.326 bw ( KiB/s): min=23648, max=78008, per=99.55%, avg=67920.84, stdev=12232.36, samples=19 00:13:57.326 iops : min= 5912, max=19502, avg=16980.21, stdev=3058.09, samples=19 00:13:57.326 lat (usec) : 50=42.40%, 100=57.01%, 250=0.32%, 500=0.06%, 750=0.01% 00:13:57.326 lat (usec) : 1000=0.01% 00:13:57.326 lat (msec) : 2=0.05%, 4=0.12%, 10=0.01% 00:13:57.326 cpu : usr=3.00%, sys=14.44%, ctx=170622, majf=0, minf=796 00:13:57.327 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:57.327 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:57.327 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:57.327 issued rwts: total=0,170587,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:57.327 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:57.327 00:13:57.327 Run status group 0 (all jobs): 00:13:57.327 WRITE: bw=66.6MiB/s (69.9MB/s), 66.6MiB/s-66.6MiB/s (69.9MB/s-69.9MB/s), io=666MiB (699MB), run=10001-10001msec 00:13:57.327 00:13:57.327 Disk stats (read/write): 00:13:57.327 ublkb0: ios=0/168713, merge=0/0, ticks=0/8097, in_queue=8097, util=99.05% 00:13:57.327 10:45:57 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:13:57.327 10:45:57 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.327 10:45:57 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:57.327 [2024-12-16 10:45:57.256242] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:13:57.327 [2024-12-16 10:45:57.303978] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:57.327 [2024-12-16 10:45:57.304558] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:13:57.327 [2024-12-16 10:45:57.311954] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:57.327 [2024-12-16 10:45:57.312189] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:13:57.327 [2024-12-16 10:45:57.312200] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:13:57.585 10:45:57 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.585 10:45:57 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd 
ublk_stop_disk 0 00:13:57.585 10:45:57 ublk.test_create_ublk -- common/autotest_common.sh@650 -- # local es=0 00:13:57.585 10:45:57 ublk.test_create_ublk -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:13:57.585 10:45:57 ublk.test_create_ublk -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:13:57.585 10:45:57 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:57.585 10:45:57 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:13:57.585 10:45:57 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:57.585 10:45:57 ublk.test_create_ublk -- common/autotest_common.sh@653 -- # rpc_cmd ublk_stop_disk 0 00:13:57.585 10:45:57 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.585 10:45:57 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:57.585 [2024-12-16 10:45:57.327015] ublk.c:1087:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:13:57.585 request: 00:13:57.585 { 00:13:57.585 "ublk_id": 0, 00:13:57.585 "method": "ublk_stop_disk", 00:13:57.585 "req_id": 1 00:13:57.585 } 00:13:57.585 Got JSON-RPC error response 00:13:57.585 response: 00:13:57.585 { 00:13:57.585 "code": -19, 00:13:57.585 "message": "No such device" 00:13:57.585 } 00:13:57.585 10:45:57 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:13:57.585 10:45:57 ublk.test_create_ublk -- common/autotest_common.sh@653 -- # es=1 00:13:57.585 10:45:57 ublk.test_create_ublk -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:57.585 10:45:57 ublk.test_create_ublk -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:57.585 10:45:57 ublk.test_create_ublk -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:57.585 10:45:57 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:13:57.585 10:45:57 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.585 10:45:57 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:57.585 [2024-12-16 10:45:57.336011] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:13:57.585 [2024-12-16 10:45:57.337064] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:13:57.585 [2024-12-16 10:45:57.337091] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:13:57.585 10:45:57 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.585 10:45:57 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:13:57.585 10:45:57 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.585 10:45:57 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:57.585 10:45:57 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.585 10:45:57 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:13:57.585 10:45:57 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:13:57.585 10:45:57 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.585 10:45:57 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:57.585 10:45:57 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.585 10:45:57 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:13:57.585 10:45:57 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:13:57.585 10:45:57 
ublk.test_create_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:13:57.585 10:45:57 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:13:57.585 10:45:57 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.585 10:45:57 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:57.585 10:45:57 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.585 10:45:57 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:13:57.585 10:45:57 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:13:57.585 10:45:57 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:13:57.585 00:13:57.585 real 0m10.750s 00:13:57.585 user 0m0.584s 00:13:57.585 sys 0m1.522s 00:13:57.585 10:45:57 ublk.test_create_ublk -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:57.585 ************************************ 00:13:57.585 END TEST test_create_ublk 00:13:57.585 ************************************ 00:13:57.585 10:45:57 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:57.585 10:45:57 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:13:57.585 10:45:57 ublk -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:13:57.585 10:45:57 ublk -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:57.585 10:45:57 ublk -- common/autotest_common.sh@10 -- # set +x 00:13:57.585 ************************************ 00:13:57.585 START TEST test_create_multi_ublk 00:13:57.585 ************************************ 00:13:57.585 10:45:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@1125 -- # test_create_multi_ublk 00:13:57.585 10:45:57 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:13:57.585 10:45:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.585 10:45:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:57.585 [2024-12-16 10:45:57.542939] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:13:57.585 [2024-12-16 10:45:57.543836] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:13:57.585 10:45:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.585 10:45:57 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:13:57.585 10:45:57 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:13:57.585 10:45:57 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:57.585 10:45:57 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:13:57.585 10:45:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.585 10:45:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:57.844 10:45:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.844 10:45:57 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:13:57.844 10:45:57 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:13:57.844 10:45:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.844 10:45:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:57.844 [2024-12-16 10:45:57.627049] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 
num_queues 4 queue_depth 512 00:13:57.844 [2024-12-16 10:45:57.627349] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:13:57.844 [2024-12-16 10:45:57.627361] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:13:57.844 [2024-12-16 10:45:57.627366] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:13:57.844 [2024-12-16 10:45:57.638983] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:57.844 [2024-12-16 10:45:57.639000] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:57.844 [2024-12-16 10:45:57.650950] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:57.844 [2024-12-16 10:45:57.651434] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:13:57.844 [2024-12-16 10:45:57.680954] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:13:57.844 10:45:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.844 10:45:57 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:13:57.844 10:45:57 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:57.844 10:45:57 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:13:57.844 10:45:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.844 10:45:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:57.844 10:45:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.844 10:45:57 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:13:57.844 10:45:57 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:13:57.844 10:45:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.844 10:45:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:57.844 [2024-12-16 10:45:57.763057] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:13:57.844 [2024-12-16 10:45:57.763356] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:13:57.844 [2024-12-16 10:45:57.763368] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:13:57.844 [2024-12-16 10:45:57.763375] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:13:57.844 [2024-12-16 10:45:57.776963] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:57.844 [2024-12-16 10:45:57.776982] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:57.844 [2024-12-16 10:45:57.788952] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:57.844 [2024-12-16 10:45:57.789445] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:13:57.844 [2024-12-16 10:45:57.824952] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:13:58.102 10:45:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.102 10:45:57 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:13:58.102 10:45:57 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:58.102 
10:45:57 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:13:58.102 10:45:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.102 10:45:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:58.102 10:45:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.102 10:45:57 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:13:58.102 10:45:57 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:13:58.102 10:45:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.102 10:45:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:58.102 [2024-12-16 10:45:57.909038] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:13:58.102 [2024-12-16 10:45:57.909344] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:13:58.102 [2024-12-16 10:45:57.909357] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:13:58.102 [2024-12-16 10:45:57.909362] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:13:58.102 [2024-12-16 10:45:57.920960] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:58.102 [2024-12-16 10:45:57.920978] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:58.102 [2024-12-16 10:45:57.932950] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:58.102 [2024-12-16 10:45:57.933429] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:13:58.102 [2024-12-16 10:45:57.957954] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:13:58.102 10:45:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.102 10:45:57 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:13:58.102 10:45:57 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:58.102 10:45:57 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:13:58.102 10:45:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.102 10:45:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:58.102 10:45:58 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.102 10:45:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:13:58.102 10:45:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:13:58.103 10:45:58 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.103 10:45:58 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:58.103 [2024-12-16 10:45:58.041039] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:13:58.103 [2024-12-16 10:45:58.041341] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:13:58.103 [2024-12-16 10:45:58.041353] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:13:58.103 [2024-12-16 10:45:58.041360] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:13:58.103 
[2024-12-16 10:45:58.054100] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:58.103 [2024-12-16 10:45:58.054121] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:58.103 [2024-12-16 10:45:58.064957] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:58.103 [2024-12-16 10:45:58.065457] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:13:58.103 [2024-12-16 10:45:58.077966] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:13:58.103 10:45:58 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.361 10:45:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:13:58.361 10:45:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:13:58.361 10:45:58 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.361 10:45:58 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:58.361 10:45:58 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.361 10:45:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:13:58.361 { 00:13:58.361 "ublk_device": "/dev/ublkb0", 00:13:58.361 "id": 0, 00:13:58.361 "queue_depth": 512, 00:13:58.361 "num_queues": 4, 00:13:58.361 "bdev_name": "Malloc0" 00:13:58.361 }, 00:13:58.361 { 00:13:58.361 "ublk_device": "/dev/ublkb1", 00:13:58.361 "id": 1, 00:13:58.361 "queue_depth": 512, 00:13:58.361 "num_queues": 4, 00:13:58.361 "bdev_name": "Malloc1" 00:13:58.361 }, 00:13:58.361 { 00:13:58.361 "ublk_device": "/dev/ublkb2", 00:13:58.361 "id": 2, 00:13:58.361 "queue_depth": 512, 00:13:58.361 "num_queues": 4, 00:13:58.361 "bdev_name": "Malloc2" 00:13:58.361 }, 00:13:58.361 { 00:13:58.361 "ublk_device": "/dev/ublkb3", 00:13:58.361 "id": 3, 00:13:58.361 "queue_depth": 512, 00:13:58.361 "num_queues": 4, 00:13:58.361 "bdev_name": "Malloc3" 00:13:58.361 } 00:13:58.361 ]' 00:13:58.361 10:45:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:13:58.361 10:45:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:58.361 10:45:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:13:58.361 10:45:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:13:58.361 10:45:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:13:58.361 10:45:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:13:58.361 10:45:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:13:58.361 10:45:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:13:58.361 10:45:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:13:58.361 10:45:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:13:58.361 10:45:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:13:58.361 10:45:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:13:58.361 10:45:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:58.361 10:45:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:13:58.361 10:45:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = 
\/\d\e\v\/\u\b\l\k\b\1 ]] 00:13:58.361 10:45:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:13:58.361 10:45:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:13:58.361 10:45:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:13:58.620 10:45:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:13:58.620 10:45:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:13:58.620 10:45:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:13:58.620 10:45:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:13:58.620 10:45:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:13:58.620 10:45:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:58.620 10:45:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:13:58.620 10:45:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:13:58.620 10:45:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:13:58.620 10:45:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:13:58.620 10:45:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:13:58.620 10:45:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:13:58.620 10:45:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:13:58.620 10:45:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:13:58.620 10:45:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:13:58.620 10:45:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:13:58.620 10:45:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:58.620 10:45:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:13:58.878 10:45:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:13:58.878 10:45:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:13:58.878 10:45:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:13:58.878 10:45:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:13:58.878 10:45:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:13:58.878 10:45:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:13:58.878 10:45:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:13:58.878 10:45:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:13:58.878 10:45:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:13:58.878 10:45:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:13:58.878 10:45:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:13:58.878 10:45:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:58.878 10:45:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:13:58.878 10:45:58 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.878 10:45:58 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:58.878 [2024-12-16 10:45:58.761046] ublk.c: 
469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:13:58.878 [2024-12-16 10:45:58.802347] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:58.878 [2024-12-16 10:45:58.803351] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:13:58.878 [2024-12-16 10:45:58.807955] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:58.878 [2024-12-16 10:45:58.808186] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:13:58.878 [2024-12-16 10:45:58.808198] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:13:58.878 10:45:58 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.878 10:45:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:58.878 10:45:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:13:58.878 10:45:58 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.878 10:45:58 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:58.878 [2024-12-16 10:45:58.824022] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:13:58.878 [2024-12-16 10:45:58.855403] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:58.878 [2024-12-16 10:45:58.856330] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:13:58.878 [2024-12-16 10:45:58.862953] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:58.878 [2024-12-16 10:45:58.863182] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:13:58.878 [2024-12-16 10:45:58.863193] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:13:59.136 10:45:58 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.136 10:45:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:59.136 10:45:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:13:59.136 10:45:58 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.136 10:45:58 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:59.137 [2024-12-16 10:45:58.879018] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:13:59.137 [2024-12-16 10:45:58.920382] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:59.137 [2024-12-16 10:45:58.921293] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:13:59.137 [2024-12-16 10:45:58.926949] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:59.137 [2024-12-16 10:45:58.927170] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:13:59.137 [2024-12-16 10:45:58.927181] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:13:59.137 10:45:58 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.137 10:45:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:59.137 10:45:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:13:59.137 10:45:58 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.137 10:45:58 ublk.test_create_multi_ublk -- 
common/autotest_common.sh@10 -- # set +x 00:13:59.137 [2024-12-16 10:45:58.943023] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:13:59.137 [2024-12-16 10:45:58.984291] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:59.137 [2024-12-16 10:45:58.985227] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:13:59.137 [2024-12-16 10:45:58.990948] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:59.137 [2024-12-16 10:45:58.991161] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:13:59.137 [2024-12-16 10:45:58.991171] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:13:59.137 10:45:58 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.137 10:45:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:13:59.395 [2024-12-16 10:45:59.182009] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:13:59.395 [2024-12-16 10:45:59.183007] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:13:59.395 [2024-12-16 10:45:59.183038] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:13:59.395 10:45:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:13:59.395 10:45:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:59.395 10:45:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:13:59.395 10:45:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.395 10:45:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:59.395 10:45:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.395 10:45:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:59.395 10:45:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:13:59.395 10:45:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.395 10:45:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:59.395 10:45:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.395 10:45:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:59.395 10:45:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:13:59.395 10:45:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.395 10:45:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:59.395 10:45:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.395 10:45:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:59.395 10:45:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:13:59.395 10:45:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.395 10:45:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:59.652 10:45:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.653 10:45:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:13:59.653 10:45:59 
ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:13:59.653 10:45:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.653 10:45:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:59.653 10:45:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.653 10:45:59 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:13:59.653 10:45:59 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:13:59.653 10:45:59 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:13:59.653 10:45:59 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:13:59.653 10:45:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.653 10:45:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:59.653 10:45:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.653 10:45:59 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:13:59.653 10:45:59 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:13:59.653 10:45:59 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:13:59.653 00:13:59.653 real 0m2.009s 00:13:59.653 user 0m0.828s 00:13:59.653 sys 0m0.142s 00:13:59.653 10:45:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:59.653 ************************************ 00:13:59.653 END TEST test_create_multi_ublk 00:13:59.653 10:45:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:59.653 ************************************ 00:13:59.653 10:45:59 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:13:59.653 10:45:59 ublk -- ublk/ublk.sh@147 -- # cleanup 00:13:59.653 10:45:59 ublk -- ublk/ublk.sh@130 -- # killprocess 82486 00:13:59.653 10:45:59 ublk -- common/autotest_common.sh@950 -- # '[' -z 82486 ']' 00:13:59.653 10:45:59 ublk -- common/autotest_common.sh@954 -- # kill -0 82486 00:13:59.653 10:45:59 ublk -- common/autotest_common.sh@955 -- # uname 00:13:59.653 10:45:59 ublk -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:59.653 10:45:59 ublk -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82486 00:13:59.653 10:45:59 ublk -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:59.653 10:45:59 ublk -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:59.653 killing process with pid 82486 00:13:59.653 10:45:59 ublk -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82486' 00:13:59.653 10:45:59 ublk -- common/autotest_common.sh@969 -- # kill 82486 00:13:59.653 10:45:59 ublk -- common/autotest_common.sh@974 -- # wait 82486 00:13:59.911 [2024-12-16 10:45:59.744508] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:13:59.911 [2024-12-16 10:45:59.744569] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:14:00.171 00:14:00.171 real 0m17.899s 00:14:00.171 user 0m28.014s 00:14:00.171 sys 0m7.744s 00:14:00.171 10:46:00 ublk -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:00.171 10:46:00 ublk -- common/autotest_common.sh@10 -- # set +x 00:14:00.171 ************************************ 00:14:00.171 END TEST ublk 00:14:00.171 ************************************ 00:14:00.171 10:46:00 -- spdk/autotest.sh@248 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:14:00.171 
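The ublk_recovery suite launched here covers the crash/reattach path: kill the SPDK target while a ublk device is live, start a new target, and reclaim the existing /dev/ublkbN instead of recreating it. A rough outline of that flow, assuming SPDK's ublk_recover_disk RPC and a target created with user recovery enabled (the authoritative steps are in ublk_recovery.sh and may differ):

  kill -9 "$tgt_pid"                               # crash the target while I/O is in flight
  ./build/bin/spdk_tgt -L ublk & tgt_pid=$!        # fresh target process
  ./scripts/rpc.py ublk_create_target              # recovery support assumed enabled here
  ./scripts/rpc.py bdev_malloc_create -b malloc0 128 4096
  ./scripts/rpc.py ublk_recover_disk malloc0 1     # reattach the surviving /dev/ublkb1 (id illustrative)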
10:46:00 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:14:00.171 10:46:00 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:00.171 10:46:00 -- common/autotest_common.sh@10 -- # set +x 00:14:00.171 ************************************ 00:14:00.171 START TEST ublk_recovery 00:14:00.171 ************************************ 00:14:00.171 10:46:00 ublk_recovery -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:14:00.171 * Looking for test storage... 00:14:00.171 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:14:00.171 10:46:00 ublk_recovery -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:14:00.171 10:46:00 ublk_recovery -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:14:00.171 10:46:00 ublk_recovery -- common/autotest_common.sh@1681 -- # lcov --version 00:14:00.432 10:46:00 ublk_recovery -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:14:00.432 10:46:00 ublk_recovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:00.432 10:46:00 ublk_recovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:00.432 10:46:00 ublk_recovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:00.432 10:46:00 ublk_recovery -- scripts/common.sh@336 -- # IFS=.-: 00:14:00.433 10:46:00 ublk_recovery -- scripts/common.sh@336 -- # read -ra ver1 00:14:00.433 10:46:00 ublk_recovery -- scripts/common.sh@337 -- # IFS=.-: 00:14:00.433 10:46:00 ublk_recovery -- scripts/common.sh@337 -- # read -ra ver2 00:14:00.433 10:46:00 ublk_recovery -- scripts/common.sh@338 -- # local 'op=<' 00:14:00.433 10:46:00 ublk_recovery -- scripts/common.sh@340 -- # ver1_l=2 00:14:00.433 10:46:00 ublk_recovery -- scripts/common.sh@341 -- # ver2_l=1 00:14:00.433 10:46:00 ublk_recovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:00.433 10:46:00 ublk_recovery -- scripts/common.sh@344 -- # case "$op" in 00:14:00.433 10:46:00 ublk_recovery -- scripts/common.sh@345 -- # : 1 00:14:00.433 10:46:00 ublk_recovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:00.433 10:46:00 ublk_recovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:00.433 10:46:00 ublk_recovery -- scripts/common.sh@365 -- # decimal 1 00:14:00.433 10:46:00 ublk_recovery -- scripts/common.sh@353 -- # local d=1 00:14:00.433 10:46:00 ublk_recovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:00.433 10:46:00 ublk_recovery -- scripts/common.sh@355 -- # echo 1 00:14:00.433 10:46:00 ublk_recovery -- scripts/common.sh@365 -- # ver1[v]=1 00:14:00.433 10:46:00 ublk_recovery -- scripts/common.sh@366 -- # decimal 2 00:14:00.433 10:46:00 ublk_recovery -- scripts/common.sh@353 -- # local d=2 00:14:00.433 10:46:00 ublk_recovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:00.433 10:46:00 ublk_recovery -- scripts/common.sh@355 -- # echo 2 00:14:00.433 10:46:00 ublk_recovery -- scripts/common.sh@366 -- # ver2[v]=2 00:14:00.433 10:46:00 ublk_recovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:00.433 10:46:00 ublk_recovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:00.433 10:46:00 ublk_recovery -- scripts/common.sh@368 -- # return 0 00:14:00.433 10:46:00 ublk_recovery -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:00.433 10:46:00 ublk_recovery -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:14:00.433 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:00.433 --rc genhtml_branch_coverage=1 00:14:00.433 --rc genhtml_function_coverage=1 00:14:00.433 --rc genhtml_legend=1 00:14:00.433 --rc geninfo_all_blocks=1 00:14:00.433 --rc geninfo_unexecuted_blocks=1 00:14:00.433 00:14:00.433 ' 00:14:00.433 10:46:00 ublk_recovery -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:14:00.433 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:00.433 --rc genhtml_branch_coverage=1 00:14:00.433 --rc genhtml_function_coverage=1 00:14:00.433 --rc genhtml_legend=1 00:14:00.433 --rc geninfo_all_blocks=1 00:14:00.433 --rc geninfo_unexecuted_blocks=1 00:14:00.433 00:14:00.433 ' 00:14:00.433 10:46:00 ublk_recovery -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:14:00.433 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:00.433 --rc genhtml_branch_coverage=1 00:14:00.433 --rc genhtml_function_coverage=1 00:14:00.433 --rc genhtml_legend=1 00:14:00.433 --rc geninfo_all_blocks=1 00:14:00.433 --rc geninfo_unexecuted_blocks=1 00:14:00.433 00:14:00.433 ' 00:14:00.433 10:46:00 ublk_recovery -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:14:00.433 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:00.433 --rc genhtml_branch_coverage=1 00:14:00.433 --rc genhtml_function_coverage=1 00:14:00.433 --rc genhtml_legend=1 00:14:00.433 --rc geninfo_all_blocks=1 00:14:00.433 --rc geninfo_unexecuted_blocks=1 00:14:00.433 00:14:00.433 ' 00:14:00.433 10:46:00 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:14:00.433 10:46:00 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:14:00.433 10:46:00 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:14:00.433 10:46:00 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:14:00.433 10:46:00 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:14:00.433 10:46:00 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:14:00.433 10:46:00 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:14:00.433 10:46:00 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:14:00.433 10:46:00 ublk_recovery -- lvol/common.sh@14 
-- # LVS_DEFAULT_CAPACITY=130023424 00:14:00.433 10:46:00 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:14:00.433 10:46:00 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=82853 00:14:00.433 10:46:00 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:00.433 10:46:00 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 82853 00:14:00.433 10:46:00 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:14:00.433 10:46:00 ublk_recovery -- common/autotest_common.sh@831 -- # '[' -z 82853 ']' 00:14:00.433 10:46:00 ublk_recovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:00.433 10:46:00 ublk_recovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:00.433 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:00.433 10:46:00 ublk_recovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:00.433 10:46:00 ublk_recovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:00.433 10:46:00 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:00.433 [2024-12-16 10:46:00.292039] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:14:00.433 [2024-12-16 10:46:00.292134] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82853 ] 00:14:00.694 [2024-12-16 10:46:00.424509] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:00.694 [2024-12-16 10:46:00.469720] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:14:00.694 [2024-12-16 10:46:00.469768] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:01.260 10:46:01 ublk_recovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:01.260 10:46:01 ublk_recovery -- common/autotest_common.sh@864 -- # return 0 00:14:01.260 10:46:01 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:14:01.260 10:46:01 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.260 10:46:01 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:01.260 [2024-12-16 10:46:01.153953] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:01.260 [2024-12-16 10:46:01.155313] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:01.260 10:46:01 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.260 10:46:01 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:14:01.260 10:46:01 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.260 10:46:01 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:01.260 malloc0 00:14:01.260 10:46:01 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.260 10:46:01 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:14:01.260 10:46:01 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.260 10:46:01 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:01.260 [2024-12-16 10:46:01.194085] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 
num_queues 2 queue_depth 128 00:14:01.260 [2024-12-16 10:46:01.194190] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:14:01.260 [2024-12-16 10:46:01.194199] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:14:01.260 [2024-12-16 10:46:01.194208] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:14:01.260 [2024-12-16 10:46:01.203050] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:01.260 [2024-12-16 10:46:01.203078] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:01.260 [2024-12-16 10:46:01.209961] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:01.260 [2024-12-16 10:46:01.210114] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:14:01.260 [2024-12-16 10:46:01.226967] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:14:01.260 1 00:14:01.260 10:46:01 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.260 10:46:01 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:14:02.633 10:46:02 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=82886 00:14:02.633 10:46:02 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:14:02.633 10:46:02 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:14:02.633 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:02.633 fio-3.35 00:14:02.633 Starting 1 process 00:14:07.899 10:46:07 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 82853 00:14:07.899 10:46:07 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:14:13.179 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 82853 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:14:13.179 10:46:12 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=82997 00:14:13.179 10:46:12 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:14:13.179 10:46:12 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:13.179 10:46:12 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 82997 00:14:13.179 10:46:12 ublk_recovery -- common/autotest_common.sh@831 -- # '[' -z 82997 ']' 00:14:13.179 10:46:12 ublk_recovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:13.179 10:46:12 ublk_recovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:13.179 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:13.179 10:46:12 ublk_recovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:13.179 10:46:12 ublk_recovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:13.179 10:46:12 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:13.179 [2024-12-16 10:46:12.319534] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
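The trace above captures the core of the recovery scenario: a ublk disk is created and put under fio load, the first SPDK target is killed with SIGKILL mid-I/O, and a second target is started to recover the device. Condensed from the commands visible in the trace (a sketch, not the verbatim ublk_recovery.sh):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# pid 82853 in this log: first target, device creation, load
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk &
$rpc ublk_create_target
$rpc bdev_malloc_create -b malloc0 64 4096
$rpc ublk_start_disk malloc0 1 -q 2 -d 128      # exposes /dev/ublkb1
taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 \
    --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 \
    --time_based --runtime=60 &
kill -9 82853                                    # crash the target under load
# pid 82997 in this log: second target re-attaches the live device
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk &
$rpc ublk_recover_disk malloc0 1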
00:14:13.179 [2024-12-16 10:46:12.319649] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82997 ] 00:14:13.179 [2024-12-16 10:46:12.451984] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:13.179 [2024-12-16 10:46:12.492901] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:13.179 [2024-12-16 10:46:12.492972] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:14:13.179 10:46:13 ublk_recovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:13.179 10:46:13 ublk_recovery -- common/autotest_common.sh@864 -- # return 0 00:14:13.179 10:46:13 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:14:13.179 10:46:13 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.179 10:46:13 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:13.179 [2024-12-16 10:46:13.153948] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:13.179 [2024-12-16 10:46:13.155189] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:13.179 10:46:13 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.179 10:46:13 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:14:13.179 10:46:13 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.179 10:46:13 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:13.437 malloc0 00:14:13.437 10:46:13 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.437 10:46:13 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:14:13.437 10:46:13 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.437 10:46:13 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:13.437 [2024-12-16 10:46:13.194064] ublk.c:2106:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:14:13.437 [2024-12-16 10:46:13.194100] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:14:13.437 [2024-12-16 10:46:13.194106] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:14:13.437 [2024-12-16 10:46:13.201988] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:14:13.437 [2024-12-16 10:46:13.202012] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 2 00:14:13.437 [2024-12-16 10:46:13.202024] ublk.c:2035:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:14:13.437 [2024-12-16 10:46:13.202080] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:14:13.437 1 00:14:13.437 10:46:13 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.437 10:46:13 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 82886 00:14:13.438 [2024-12-16 10:46:13.209956] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:14:13.438 [2024-12-16 10:46:13.216478] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:14:13.438 [2024-12-16 10:46:13.224140] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:14:13.438 [2024-12-16 
10:46:13.224157] ublk.c: 413:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:15:09.659 00:15:09.659 fio_test: (groupid=0, jobs=1): err= 0: pid=82889: Mon Dec 16 10:47:02 2024 00:15:09.659 read: IOPS=26.9k, BW=105MiB/s (110MB/s)(6294MiB/60002msec) 00:15:09.659 slat (nsec): min=953, max=134316, avg=4952.76, stdev=1397.85 00:15:09.659 clat (usec): min=666, max=5992.2k, avg=2327.84, stdev=36851.56 00:15:09.659 lat (usec): min=671, max=5992.2k, avg=2332.79, stdev=36851.57 00:15:09.659 clat percentiles (usec): 00:15:09.659 | 1.00th=[ 1680], 5.00th=[ 1778], 10.00th=[ 1811], 20.00th=[ 1844], 00:15:09.659 | 30.00th=[ 1876], 40.00th=[ 1909], 50.00th=[ 1942], 60.00th=[ 1991], 00:15:09.659 | 70.00th=[ 2040], 80.00th=[ 2212], 90.00th=[ 2409], 95.00th=[ 2933], 00:15:09.659 | 99.00th=[ 4752], 99.50th=[ 5276], 99.90th=[ 6718], 99.95th=[ 7111], 00:15:09.659 | 99.99th=[12911] 00:15:09.659 bw ( KiB/s): min=24656, max=133352, per=100.00%, avg=118214.08, stdev=16066.88, samples=108 00:15:09.659 iops : min= 6164, max=33338, avg=29553.52, stdev=4016.72, samples=108 00:15:09.659 write: IOPS=26.8k, BW=105MiB/s (110MB/s)(6289MiB/60002msec); 0 zone resets 00:15:09.659 slat (nsec): min=975, max=304656, avg=5005.91, stdev=1449.20 00:15:09.659 clat (usec): min=670, max=5992.4k, avg=2429.68, stdev=38640.27 00:15:09.659 lat (usec): min=686, max=5992.4k, avg=2434.69, stdev=38640.27 00:15:09.659 clat percentiles (usec): 00:15:09.659 | 1.00th=[ 1729], 5.00th=[ 1860], 10.00th=[ 1893], 20.00th=[ 1926], 00:15:09.659 | 30.00th=[ 1958], 40.00th=[ 1991], 50.00th=[ 2040], 60.00th=[ 2073], 00:15:09.659 | 70.00th=[ 2147], 80.00th=[ 2311], 90.00th=[ 2507], 95.00th=[ 2900], 00:15:09.659 | 99.00th=[ 4752], 99.50th=[ 5407], 99.90th=[ 6915], 99.95th=[ 7504], 00:15:09.659 | 99.99th=[13042] 00:15:09.660 bw ( KiB/s): min=24856, max=133096, per=100.00%, avg=118110.63, stdev=15914.53, samples=108 00:15:09.660 iops : min= 6214, max=33274, avg=29527.66, stdev=3978.63, samples=108 00:15:09.660 lat (usec) : 750=0.01%, 1000=0.01% 00:15:09.660 lat (msec) : 2=52.15%, 4=45.58%, 10=2.25%, 20=0.01%, >=2000=0.01% 00:15:09.660 cpu : usr=5.90%, sys=27.16%, ctx=107067, majf=0, minf=13 00:15:09.660 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:15:09.660 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:09.660 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:09.660 issued rwts: total=1611339,1609934,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:09.660 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:09.660 00:15:09.660 Run status group 0 (all jobs): 00:15:09.660 READ: bw=105MiB/s (110MB/s), 105MiB/s-105MiB/s (110MB/s-110MB/s), io=6294MiB (6600MB), run=60002-60002msec 00:15:09.660 WRITE: bw=105MiB/s (110MB/s), 105MiB/s-105MiB/s (110MB/s-110MB/s), io=6289MiB (6594MB), run=60002-60002msec 00:15:09.660 00:15:09.660 Disk stats (read/write): 00:15:09.660 ublkb1: ios=1607708/1606306, merge=0/0, ticks=3663884/3693656, in_queue=7357540, util=99.89% 00:15:09.660 10:47:02 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:15:09.660 10:47:02 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.660 10:47:02 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:09.660 [2024-12-16 10:47:02.489410] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:15:09.660 [2024-12-16 10:47:02.532961] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 
completed 00:15:09.660 [2024-12-16 10:47:02.533097] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:15:09.660 [2024-12-16 10:47:02.540955] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:09.660 [2024-12-16 10:47:02.541058] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:15:09.660 [2024-12-16 10:47:02.541065] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:15:09.660 10:47:02 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.660 10:47:02 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:15:09.660 10:47:02 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.660 10:47:02 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:09.660 [2024-12-16 10:47:02.557022] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:15:09.660 [2024-12-16 10:47:02.557908] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:15:09.660 [2024-12-16 10:47:02.557946] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:15:09.660 10:47:02 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.660 10:47:02 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:15:09.660 10:47:02 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:15:09.660 10:47:02 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 82997 00:15:09.660 10:47:02 ublk_recovery -- common/autotest_common.sh@950 -- # '[' -z 82997 ']' 00:15:09.660 10:47:02 ublk_recovery -- common/autotest_common.sh@954 -- # kill -0 82997 00:15:09.660 10:47:02 ublk_recovery -- common/autotest_common.sh@955 -- # uname 00:15:09.660 10:47:02 ublk_recovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:09.660 10:47:02 ublk_recovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82997 00:15:09.660 10:47:02 ublk_recovery -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:09.660 10:47:02 ublk_recovery -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:09.660 killing process with pid 82997 00:15:09.660 10:47:02 ublk_recovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82997' 00:15:09.660 10:47:02 ublk_recovery -- common/autotest_common.sh@969 -- # kill 82997 00:15:09.660 10:47:02 ublk_recovery -- common/autotest_common.sh@974 -- # wait 82997 00:15:09.660 [2024-12-16 10:47:02.759460] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:15:09.660 [2024-12-16 10:47:02.759502] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:15:09.660 00:15:09.660 real 1m2.981s 00:15:09.660 user 1m44.213s 00:15:09.660 sys 0m30.990s 00:15:09.660 10:47:03 ublk_recovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:09.660 10:47:03 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:09.660 ************************************ 00:15:09.660 END TEST ublk_recovery 00:15:09.660 ************************************ 00:15:09.660 10:47:03 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:15:09.660 10:47:03 -- spdk/autotest.sh@256 -- # timing_exit lib 00:15:09.660 10:47:03 -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:09.660 10:47:03 -- common/autotest_common.sh@10 -- # set +x 00:15:09.660 10:47:03 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:15:09.660 10:47:03 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:15:09.660 10:47:03 -- spdk/autotest.sh@272 -- # '[' 0 -eq 1 ']' 00:15:09.660 10:47:03 -- spdk/autotest.sh@307 -- # '[' 0 -eq 
1 ']' 00:15:09.660 10:47:03 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:15:09.660 10:47:03 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:15:09.660 10:47:03 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:15:09.660 10:47:03 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:15:09.660 10:47:03 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:15:09.660 10:47:03 -- spdk/autotest.sh@338 -- # '[' 1 -eq 1 ']' 00:15:09.660 10:47:03 -- spdk/autotest.sh@339 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:15:09.660 10:47:03 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:15:09.660 10:47:03 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:09.660 10:47:03 -- common/autotest_common.sh@10 -- # set +x 00:15:09.660 ************************************ 00:15:09.660 START TEST ftl 00:15:09.660 ************************************ 00:15:09.660 10:47:03 ftl -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:15:09.660 * Looking for test storage... 00:15:09.660 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:15:09.660 10:47:03 ftl -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:15:09.660 10:47:03 ftl -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:15:09.660 10:47:03 ftl -- common/autotest_common.sh@1681 -- # lcov --version 00:15:09.660 10:47:03 ftl -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:15:09.660 10:47:03 ftl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:09.660 10:47:03 ftl -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:09.660 10:47:03 ftl -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:09.660 10:47:03 ftl -- scripts/common.sh@336 -- # IFS=.-: 00:15:09.660 10:47:03 ftl -- scripts/common.sh@336 -- # read -ra ver1 00:15:09.660 10:47:03 ftl -- scripts/common.sh@337 -- # IFS=.-: 00:15:09.660 10:47:03 ftl -- scripts/common.sh@337 -- # read -ra ver2 00:15:09.660 10:47:03 ftl -- scripts/common.sh@338 -- # local 'op=<' 00:15:09.660 10:47:03 ftl -- scripts/common.sh@340 -- # ver1_l=2 00:15:09.660 10:47:03 ftl -- scripts/common.sh@341 -- # ver2_l=1 00:15:09.660 10:47:03 ftl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:09.660 10:47:03 ftl -- scripts/common.sh@344 -- # case "$op" in 00:15:09.660 10:47:03 ftl -- scripts/common.sh@345 -- # : 1 00:15:09.660 10:47:03 ftl -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:09.660 10:47:03 ftl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:09.660 10:47:03 ftl -- scripts/common.sh@365 -- # decimal 1 00:15:09.660 10:47:03 ftl -- scripts/common.sh@353 -- # local d=1 00:15:09.660 10:47:03 ftl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:09.660 10:47:03 ftl -- scripts/common.sh@355 -- # echo 1 00:15:09.660 10:47:03 ftl -- scripts/common.sh@365 -- # ver1[v]=1 00:15:09.660 10:47:03 ftl -- scripts/common.sh@366 -- # decimal 2 00:15:09.660 10:47:03 ftl -- scripts/common.sh@353 -- # local d=2 00:15:09.660 10:47:03 ftl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:09.660 10:47:03 ftl -- scripts/common.sh@355 -- # echo 2 00:15:09.660 10:47:03 ftl -- scripts/common.sh@366 -- # ver2[v]=2 00:15:09.660 10:47:03 ftl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:09.660 10:47:03 ftl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:09.660 10:47:03 ftl -- scripts/common.sh@368 -- # return 0 00:15:09.660 10:47:03 ftl -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:09.660 10:47:03 ftl -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:15:09.660 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:09.660 --rc genhtml_branch_coverage=1 00:15:09.660 --rc genhtml_function_coverage=1 00:15:09.660 --rc genhtml_legend=1 00:15:09.660 --rc geninfo_all_blocks=1 00:15:09.660 --rc geninfo_unexecuted_blocks=1 00:15:09.660 00:15:09.660 ' 00:15:09.660 10:47:03 ftl -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:15:09.660 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:09.660 --rc genhtml_branch_coverage=1 00:15:09.660 --rc genhtml_function_coverage=1 00:15:09.660 --rc genhtml_legend=1 00:15:09.660 --rc geninfo_all_blocks=1 00:15:09.660 --rc geninfo_unexecuted_blocks=1 00:15:09.660 00:15:09.660 ' 00:15:09.660 10:47:03 ftl -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:15:09.660 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:09.660 --rc genhtml_branch_coverage=1 00:15:09.660 --rc genhtml_function_coverage=1 00:15:09.660 --rc genhtml_legend=1 00:15:09.660 --rc geninfo_all_blocks=1 00:15:09.660 --rc geninfo_unexecuted_blocks=1 00:15:09.660 00:15:09.660 ' 00:15:09.660 10:47:03 ftl -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:15:09.660 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:09.660 --rc genhtml_branch_coverage=1 00:15:09.660 --rc genhtml_function_coverage=1 00:15:09.660 --rc genhtml_legend=1 00:15:09.660 --rc geninfo_all_blocks=1 00:15:09.660 --rc geninfo_unexecuted_blocks=1 00:15:09.660 00:15:09.660 ' 00:15:09.660 10:47:03 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:15:09.660 10:47:03 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:15:09.660 10:47:03 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:15:09.660 10:47:03 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:15:09.660 10:47:03 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
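The scripts/common.sh block traced above (and repeated at the start of each test suite in this log) is a semantic version comparison; here it decides whether the installed lcov is older than 2.x before choosing coverage flags. A simplified sketch of the logic visible in the trace; the real helper also validates each component with a decimal() check:

cmp_versions() {   # e.g. cmp_versions 1.15 '<' 2
    local op=$2 IFS=.-
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$3"
    local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < max; v++ )); do
        # missing components compare as 0, so 1.15 vs 2 walks (1,15) vs (2,0)
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $op == '>' ]]; return; }
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $op == '<' ]]; return; }
    done
    [[ $op == *'='* ]]   # equal versions satisfy ==, <=, >=
}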
00:15:09.660 10:47:03 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:15:09.660 10:47:03 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:09.660 10:47:03 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:15:09.660 10:47:03 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:15:09.660 10:47:03 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:09.660 10:47:03 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:09.660 10:47:03 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:15:09.660 10:47:03 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:15:09.661 10:47:03 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:09.661 10:47:03 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:09.661 10:47:03 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:15:09.661 10:47:03 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:15:09.661 10:47:03 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:09.661 10:47:03 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:09.661 10:47:03 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:15:09.661 10:47:03 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:15:09.661 10:47:03 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:09.661 10:47:03 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:09.661 10:47:03 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:09.661 10:47:03 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:09.661 10:47:03 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:15:09.661 10:47:03 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:15:09.661 10:47:03 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:09.661 10:47:03 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:09.661 10:47:03 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:09.661 10:47:03 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:15:09.661 10:47:03 ftl -- ftl/ftl.sh@34 -- # PCI_ALLOWED= 00:15:09.661 10:47:03 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:15:09.661 10:47:03 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:15:09.661 10:47:03 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:15:09.661 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:09.661 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:09.661 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:09.661 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:09.661 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:09.661 10:47:03 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=83785 00:15:09.661 10:47:03 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:15:09.661 10:47:03 ftl -- ftl/ftl.sh@38 -- # waitforlisten 83785 00:15:09.661 10:47:03 ftl -- common/autotest_common.sh@831 -- # '[' -z 83785 ']' 00:15:09.661 10:47:03 ftl -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:09.661 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:09.661 10:47:03 ftl -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:09.661 10:47:03 ftl -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:09.661 10:47:03 ftl -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:09.661 10:47:03 ftl -- common/autotest_common.sh@10 -- # set +x 00:15:09.661 [2024-12-16 10:47:03.796007] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:15:09.661 [2024-12-16 10:47:03.796127] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83785 ] 00:15:09.661 [2024-12-16 10:47:03.929158] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:09.661 [2024-12-16 10:47:03.961773] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:09.661 10:47:04 ftl -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:09.661 10:47:04 ftl -- common/autotest_common.sh@864 -- # return 0 00:15:09.661 10:47:04 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:15:09.661 10:47:04 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:15:09.661 10:47:05 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:15:09.661 10:47:05 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:15:09.661 10:47:05 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:15:09.661 10:47:05 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:15:09.661 10:47:05 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:15:09.661 10:47:05 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:15:09.661 10:47:05 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:15:09.661 10:47:05 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:15:09.661 10:47:05 ftl -- ftl/ftl.sh@50 -- # break 00:15:09.661 10:47:05 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:15:09.661 10:47:05 ftl -- ftl/ftl.sh@59 -- # base_size=1310720 00:15:09.661 10:47:05 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:15:09.661 10:47:05 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:15:09.661 10:47:05 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:15:09.661 10:47:05 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:15:09.661 10:47:05 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:15:09.661 10:47:05 ftl -- ftl/ftl.sh@63 -- # break 00:15:09.661 10:47:05 ftl -- ftl/ftl.sh@66 -- # killprocess 83785 00:15:09.661 10:47:05 ftl -- common/autotest_common.sh@950 -- # '[' -z 83785 ']' 00:15:09.661 10:47:05 ftl -- common/autotest_common.sh@954 -- # kill -0 83785 00:15:09.661 10:47:05 ftl -- common/autotest_common.sh@955 -- # uname 00:15:09.661 10:47:05 ftl -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:09.661 10:47:05 ftl -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83785 00:15:09.661 10:47:05 ftl -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:09.661 10:47:05 ftl -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:09.661 killing process with pid 83785 00:15:09.661 10:47:05 ftl -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83785' 00:15:09.661 10:47:05 ftl -- common/autotest_common.sh@969 -- # kill 83785 00:15:09.661 10:47:05 ftl -- common/autotest_common.sh@974 -- # wait 83785 00:15:09.661 10:47:06 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:15:09.661 10:47:06 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:15:09.661 10:47:06 ftl -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:15:09.661 10:47:06 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:09.661 10:47:06 ftl -- common/autotest_common.sh@10 -- # set +x 00:15:09.661 ************************************ 00:15:09.661 START TEST ftl_fio_basic 00:15:09.661 ************************************ 00:15:09.661 10:47:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:15:09.661 * Looking for test storage... 00:15:09.661 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:15:09.661 10:47:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:15:09.661 10:47:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:15:09.661 10:47:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1681 -- # lcov --version 00:15:09.661 10:47:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:15:09.661 10:47:06 ftl.ftl_fio_basic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:09.661 10:47:06 ftl.ftl_fio_basic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:09.661 10:47:06 ftl.ftl_fio_basic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:09.661 10:47:06 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # IFS=.-: 00:15:09.661 10:47:06 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # read -ra ver1 00:15:09.661 10:47:06 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # IFS=.-: 00:15:09.661 10:47:06 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # read -ra ver2 00:15:09.661 10:47:06 ftl.ftl_fio_basic -- scripts/common.sh@338 -- # local 'op=<' 00:15:09.661 10:47:06 ftl.ftl_fio_basic -- scripts/common.sh@340 -- # ver1_l=2 00:15:09.661 10:47:06 ftl.ftl_fio_basic -- scripts/common.sh@341 -- # ver2_l=1 00:15:09.661 10:47:06 ftl.ftl_fio_basic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:09.661 10:47:06 ftl.ftl_fio_basic -- scripts/common.sh@344 -- # case "$op" in 00:15:09.661 10:47:06 ftl.ftl_fio_basic -- scripts/common.sh@345 -- # : 1 00:15:09.661 10:47:06 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:09.661 10:47:06 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:09.661 10:47:06 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # decimal 1 00:15:09.661 10:47:06 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=1 00:15:09.661 10:47:06 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:09.661 10:47:06 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 1 00:15:09.661 10:47:06 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # ver1[v]=1 00:15:09.661 10:47:06 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # decimal 2 00:15:09.661 10:47:06 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=2 00:15:09.661 10:47:06 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:09.661 10:47:06 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 2 00:15:09.661 10:47:06 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # ver2[v]=2 00:15:09.661 10:47:06 ftl.ftl_fio_basic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:09.661 10:47:06 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:09.661 10:47:06 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # return 0 00:15:09.661 10:47:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:09.661 10:47:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:15:09.661 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:09.661 --rc genhtml_branch_coverage=1 00:15:09.661 --rc genhtml_function_coverage=1 00:15:09.661 --rc genhtml_legend=1 00:15:09.661 --rc geninfo_all_blocks=1 00:15:09.661 --rc geninfo_unexecuted_blocks=1 00:15:09.661 00:15:09.661 ' 00:15:09.661 10:47:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:15:09.661 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:09.661 --rc genhtml_branch_coverage=1 00:15:09.661 --rc genhtml_function_coverage=1 00:15:09.661 --rc genhtml_legend=1 00:15:09.661 --rc geninfo_all_blocks=1 00:15:09.661 --rc geninfo_unexecuted_blocks=1 00:15:09.661 00:15:09.661 ' 00:15:09.661 10:47:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:15:09.661 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:09.661 --rc genhtml_branch_coverage=1 00:15:09.661 --rc genhtml_function_coverage=1 00:15:09.662 --rc genhtml_legend=1 00:15:09.662 --rc geninfo_all_blocks=1 00:15:09.662 --rc geninfo_unexecuted_blocks=1 00:15:09.662 00:15:09.662 ' 00:15:09.662 10:47:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:15:09.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:09.662 --rc genhtml_branch_coverage=1 00:15:09.662 --rc genhtml_function_coverage=1 00:15:09.662 --rc genhtml_legend=1 00:15:09.662 --rc geninfo_all_blocks=1 00:15:09.662 --rc geninfo_unexecuted_blocks=1 00:15:09.662 00:15:09.662 ' 00:15:09.662 10:47:06 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:15:09.662 10:47:06 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:15:09.662 10:47:06 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:15:09.662 10:47:06 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:15:09.662 10:47:06 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
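Before this test started, ftl.sh (traced a few entries earlier) picked the devices with jq filters over bdev_get_bdevs output: the write cache must be a non-zoned namespace exposing 64-byte metadata, and the base device is any other non-zoned namespace of at least 1310720 blocks. Condensed from the jq expressions in that trace:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
cache_disks=$($rpc bdev_get_bdevs | jq -r '.[] |
    select(.md_size==64 and .zoned == false and .num_blocks >= 1310720)
    .driver_specific.nvme[].pci_address')               # -> 0000:00:10.0
base_disks=$($rpc bdev_get_bdevs | jq -r '.[] |
    select(.driver_specific.nvme[0].pci_address!="0000:00:10.0"
    and .zoned == false and .num_blocks >= 1310720)
    .driver_specific.nvme[].pci_address')               # -> 0000:00:11.0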
00:15:09.662 10:47:06 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:15:09.662 10:47:06 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:09.662 10:47:06 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:15:09.662 10:47:06 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:15:09.662 10:47:06 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:09.662 10:47:06 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:09.662 10:47:06 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:15:09.662 10:47:06 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:15:09.662 10:47:06 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:09.662 10:47:06 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:09.662 10:47:06 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:15:09.662 10:47:06 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:15:09.662 10:47:06 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:09.662 10:47:06 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:09.662 10:47:06 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:15:09.662 10:47:06 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:15:09.662 10:47:06 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:09.662 10:47:06 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:09.662 10:47:06 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:09.662 10:47:06 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:09.662 10:47:06 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:15:09.662 10:47:06 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:15:09.662 10:47:06 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:09.662 10:47:06 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:09.662 10:47:06 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:15:09.662 10:47:06 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:15:09.662 10:47:06 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:15:09.662 10:47:06 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:15:09.662 10:47:06 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:09.662 10:47:06 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:15:09.662 10:47:06 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:15:09.662 10:47:06 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 
randw-verify-depth128' 00:15:09.662 10:47:06 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:15:09.662 10:47:06 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:15:09.662 10:47:06 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:15:09.662 10:47:06 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:15:09.662 10:47:06 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:15:09.662 10:47:06 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:15:09.662 10:47:06 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:15:09.662 10:47:06 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:15:09.662 10:47:06 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:15:09.662 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:09.662 10:47:06 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=83906 00:15:09.662 10:47:06 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 83906 00:15:09.662 10:47:06 ftl.ftl_fio_basic -- common/autotest_common.sh@831 -- # '[' -z 83906 ']' 00:15:09.662 10:47:06 ftl.ftl_fio_basic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:09.662 10:47:06 ftl.ftl_fio_basic -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:09.662 10:47:06 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:15:09.662 10:47:06 ftl.ftl_fio_basic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:09.662 10:47:06 ftl.ftl_fio_basic -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:09.662 10:47:06 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:09.662 [2024-12-16 10:47:06.452171] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
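The 'basic' suite selected above expands to three fio workloads (randw-verify, randw-verify-j2, randw-verify-depth128) run against the FTL bdev named by FTL_BDEV_NAME, with the JSON config exported in FTL_JSON_CONF. How fio.sh consumes the list is not shown in this log; a plausible shape, with the job-file path an assumption for illustration only:

for test in $tests; do   # tests='randw-verify randw-verify-j2 randw-verify-depth128'
    # hypothetical path; the job files read FTL_BDEV_NAME / FTL_JSON_CONF from env
    fio "$testdir/config/fio/$test.fio"
done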
00:15:09.662 [2024-12-16 10:47:06.452280] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83906 ] 00:15:09.662 [2024-12-16 10:47:06.590519] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:09.662 [2024-12-16 10:47:06.624596] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:15:09.662 [2024-12-16 10:47:06.624804] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:09.662 [2024-12-16 10:47:06.624884] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:15:09.662 10:47:07 ftl.ftl_fio_basic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:09.662 10:47:07 ftl.ftl_fio_basic -- common/autotest_common.sh@864 -- # return 0 00:15:09.662 10:47:07 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:15:09.662 10:47:07 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:15:09.662 10:47:07 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:15:09.662 10:47:07 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:15:09.662 10:47:07 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:15:09.662 10:47:07 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:15:09.662 10:47:07 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:15:09.662 10:47:07 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:15:09.662 10:47:07 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:15:09.662 10:47:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:15:09.662 10:47:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:15:09.662 10:47:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:15:09.662 10:47:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:15:09.662 10:47:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:15:09.662 10:47:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:15:09.662 { 00:15:09.662 "name": "nvme0n1", 00:15:09.662 "aliases": [ 00:15:09.662 "c83edd81-441b-48c3-b38a-47faeff68d36" 00:15:09.662 ], 00:15:09.662 "product_name": "NVMe disk", 00:15:09.662 "block_size": 4096, 00:15:09.662 "num_blocks": 1310720, 00:15:09.662 "uuid": "c83edd81-441b-48c3-b38a-47faeff68d36", 00:15:09.662 "numa_id": -1, 00:15:09.662 "assigned_rate_limits": { 00:15:09.662 "rw_ios_per_sec": 0, 00:15:09.662 "rw_mbytes_per_sec": 0, 00:15:09.662 "r_mbytes_per_sec": 0, 00:15:09.662 "w_mbytes_per_sec": 0 00:15:09.662 }, 00:15:09.662 "claimed": false, 00:15:09.662 "zoned": false, 00:15:09.662 "supported_io_types": { 00:15:09.662 "read": true, 00:15:09.662 "write": true, 00:15:09.662 "unmap": true, 00:15:09.662 "flush": true, 00:15:09.662 "reset": true, 00:15:09.662 "nvme_admin": true, 00:15:09.662 "nvme_io": true, 00:15:09.662 "nvme_io_md": false, 00:15:09.662 "write_zeroes": true, 00:15:09.662 "zcopy": false, 00:15:09.662 "get_zone_info": false, 00:15:09.662 "zone_management": false, 00:15:09.662 "zone_append": false, 00:15:09.662 "compare": true, 00:15:09.662 "compare_and_write": false, 00:15:09.662 "abort": true, 00:15:09.662 
"seek_hole": false, 00:15:09.662 "seek_data": false, 00:15:09.662 "copy": true, 00:15:09.662 "nvme_iov_md": false 00:15:09.663 }, 00:15:09.663 "driver_specific": { 00:15:09.663 "nvme": [ 00:15:09.663 { 00:15:09.663 "pci_address": "0000:00:11.0", 00:15:09.663 "trid": { 00:15:09.663 "trtype": "PCIe", 00:15:09.663 "traddr": "0000:00:11.0" 00:15:09.663 }, 00:15:09.663 "ctrlr_data": { 00:15:09.663 "cntlid": 0, 00:15:09.663 "vendor_id": "0x1b36", 00:15:09.663 "model_number": "QEMU NVMe Ctrl", 00:15:09.663 "serial_number": "12341", 00:15:09.663 "firmware_revision": "8.0.0", 00:15:09.663 "subnqn": "nqn.2019-08.org.qemu:12341", 00:15:09.663 "oacs": { 00:15:09.663 "security": 0, 00:15:09.663 "format": 1, 00:15:09.663 "firmware": 0, 00:15:09.663 "ns_manage": 1 00:15:09.663 }, 00:15:09.663 "multi_ctrlr": false, 00:15:09.663 "ana_reporting": false 00:15:09.663 }, 00:15:09.663 "vs": { 00:15:09.663 "nvme_version": "1.4" 00:15:09.663 }, 00:15:09.663 "ns_data": { 00:15:09.663 "id": 1, 00:15:09.663 "can_share": false 00:15:09.663 } 00:15:09.663 } 00:15:09.663 ], 00:15:09.663 "mp_policy": "active_passive" 00:15:09.663 } 00:15:09.663 } 00:15:09.663 ]' 00:15:09.663 10:47:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:15:09.663 10:47:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:15:09.663 10:47:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:15:09.663 10:47:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=1310720 00:15:09.663 10:47:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:15:09.663 10:47:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 5120 00:15:09.663 10:47:07 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:15:09.663 10:47:07 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:15:09.663 10:47:07 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:15:09.663 10:47:07 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:15:09.663 10:47:07 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:15:09.663 10:47:07 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:15:09.663 10:47:07 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:15:09.663 10:47:08 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=3322746e-3d62-417d-aa63-6b7d0be7a114 00:15:09.663 10:47:08 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 3322746e-3d62-417d-aa63-6b7d0be7a114 00:15:09.663 10:47:08 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=ef8afe25-2150-4d93-bd54-2fe18b2aeefe 00:15:09.663 10:47:08 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 ef8afe25-2150-4d93-bd54-2fe18b2aeefe 00:15:09.663 10:47:08 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:15:09.663 10:47:08 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:15:09.663 10:47:08 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=ef8afe25-2150-4d93-bd54-2fe18b2aeefe 00:15:09.663 10:47:08 ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:15:09.663 10:47:08 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size ef8afe25-2150-4d93-bd54-2fe18b2aeefe 00:15:09.663 10:47:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=ef8afe25-2150-4d93-bd54-2fe18b2aeefe 
00:15:09.663 10:47:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:15:09.663 10:47:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:15:09.663 10:47:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:15:09.663 10:47:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ef8afe25-2150-4d93-bd54-2fe18b2aeefe 00:15:09.663 10:47:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:15:09.663 { 00:15:09.663 "name": "ef8afe25-2150-4d93-bd54-2fe18b2aeefe", 00:15:09.663 "aliases": [ 00:15:09.663 "lvs/nvme0n1p0" 00:15:09.663 ], 00:15:09.663 "product_name": "Logical Volume", 00:15:09.663 "block_size": 4096, 00:15:09.663 "num_blocks": 26476544, 00:15:09.663 "uuid": "ef8afe25-2150-4d93-bd54-2fe18b2aeefe", 00:15:09.663 "assigned_rate_limits": { 00:15:09.663 "rw_ios_per_sec": 0, 00:15:09.663 "rw_mbytes_per_sec": 0, 00:15:09.663 "r_mbytes_per_sec": 0, 00:15:09.663 "w_mbytes_per_sec": 0 00:15:09.663 }, 00:15:09.663 "claimed": false, 00:15:09.663 "zoned": false, 00:15:09.663 "supported_io_types": { 00:15:09.663 "read": true, 00:15:09.663 "write": true, 00:15:09.663 "unmap": true, 00:15:09.663 "flush": false, 00:15:09.663 "reset": true, 00:15:09.663 "nvme_admin": false, 00:15:09.663 "nvme_io": false, 00:15:09.663 "nvme_io_md": false, 00:15:09.663 "write_zeroes": true, 00:15:09.663 "zcopy": false, 00:15:09.663 "get_zone_info": false, 00:15:09.663 "zone_management": false, 00:15:09.663 "zone_append": false, 00:15:09.663 "compare": false, 00:15:09.663 "compare_and_write": false, 00:15:09.663 "abort": false, 00:15:09.663 "seek_hole": true, 00:15:09.663 "seek_data": true, 00:15:09.663 "copy": false, 00:15:09.663 "nvme_iov_md": false 00:15:09.663 }, 00:15:09.663 "driver_specific": { 00:15:09.663 "lvol": { 00:15:09.663 "lvol_store_uuid": "3322746e-3d62-417d-aa63-6b7d0be7a114", 00:15:09.663 "base_bdev": "nvme0n1", 00:15:09.663 "thin_provision": true, 00:15:09.663 "num_allocated_clusters": 0, 00:15:09.663 "snapshot": false, 00:15:09.663 "clone": false, 00:15:09.663 "esnap_clone": false 00:15:09.663 } 00:15:09.663 } 00:15:09.663 } 00:15:09.663 ]' 00:15:09.663 10:47:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:15:09.663 10:47:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:15:09.663 10:47:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:15:09.663 10:47:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=26476544 00:15:09.663 10:47:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:15:09.663 10:47:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 103424 00:15:09.663 10:47:08 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:15:09.663 10:47:08 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:15:09.663 10:47:08 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:15:09.663 10:47:08 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:15:09.663 10:47:08 ftl.ftl_fio_basic -- ftl/common.sh@47 -- # [[ -z '' ]] 00:15:09.663 10:47:08 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size ef8afe25-2150-4d93-bd54-2fe18b2aeefe 00:15:09.663 10:47:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=ef8afe25-2150-4d93-bd54-2fe18b2aeefe 00:15:09.663 10:47:08 
ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:15:09.663 10:47:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:15:09.663 10:47:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:15:09.663 10:47:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ef8afe25-2150-4d93-bd54-2fe18b2aeefe 00:15:09.663 10:47:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:15:09.663 { 00:15:09.663 "name": "ef8afe25-2150-4d93-bd54-2fe18b2aeefe", 00:15:09.663 "aliases": [ 00:15:09.663 "lvs/nvme0n1p0" 00:15:09.663 ], 00:15:09.663 "product_name": "Logical Volume", 00:15:09.663 "block_size": 4096, 00:15:09.663 "num_blocks": 26476544, 00:15:09.663 "uuid": "ef8afe25-2150-4d93-bd54-2fe18b2aeefe", 00:15:09.663 "assigned_rate_limits": { 00:15:09.663 "rw_ios_per_sec": 0, 00:15:09.663 "rw_mbytes_per_sec": 0, 00:15:09.663 "r_mbytes_per_sec": 0, 00:15:09.663 "w_mbytes_per_sec": 0 00:15:09.663 }, 00:15:09.663 "claimed": false, 00:15:09.663 "zoned": false, 00:15:09.663 "supported_io_types": { 00:15:09.663 "read": true, 00:15:09.663 "write": true, 00:15:09.663 "unmap": true, 00:15:09.663 "flush": false, 00:15:09.663 "reset": true, 00:15:09.663 "nvme_admin": false, 00:15:09.663 "nvme_io": false, 00:15:09.663 "nvme_io_md": false, 00:15:09.663 "write_zeroes": true, 00:15:09.663 "zcopy": false, 00:15:09.663 "get_zone_info": false, 00:15:09.663 "zone_management": false, 00:15:09.663 "zone_append": false, 00:15:09.663 "compare": false, 00:15:09.663 "compare_and_write": false, 00:15:09.663 "abort": false, 00:15:09.663 "seek_hole": true, 00:15:09.663 "seek_data": true, 00:15:09.663 "copy": false, 00:15:09.663 "nvme_iov_md": false 00:15:09.663 }, 00:15:09.663 "driver_specific": { 00:15:09.663 "lvol": { 00:15:09.663 "lvol_store_uuid": "3322746e-3d62-417d-aa63-6b7d0be7a114", 00:15:09.663 "base_bdev": "nvme0n1", 00:15:09.663 "thin_provision": true, 00:15:09.663 "num_allocated_clusters": 0, 00:15:09.663 "snapshot": false, 00:15:09.663 "clone": false, 00:15:09.663 "esnap_clone": false 00:15:09.663 } 00:15:09.663 } 00:15:09.663 } 00:15:09.663 ]' 00:15:09.663 10:47:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:15:09.664 10:47:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:15:09.664 10:47:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:15:09.664 10:47:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=26476544 00:15:09.664 10:47:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:15:09.664 10:47:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 103424 00:15:09.664 10:47:09 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:15:09.664 10:47:09 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:15:09.664 10:47:09 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:15:09.664 10:47:09 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 00:15:09.664 10:47:09 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:15:09.664 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:15:09.664 10:47:09 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size ef8afe25-2150-4d93-bd54-2fe18b2aeefe 00:15:09.664 10:47:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local 
bdev_name=ef8afe25-2150-4d93-bd54-2fe18b2aeefe 00:15:09.664 10:47:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:15:09.664 10:47:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:15:09.664 10:47:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:15:09.664 10:47:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ef8afe25-2150-4d93-bd54-2fe18b2aeefe 00:15:09.664 10:47:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:15:09.664 { 00:15:09.664 "name": "ef8afe25-2150-4d93-bd54-2fe18b2aeefe", 00:15:09.664 "aliases": [ 00:15:09.664 "lvs/nvme0n1p0" 00:15:09.664 ], 00:15:09.664 "product_name": "Logical Volume", 00:15:09.664 "block_size": 4096, 00:15:09.664 "num_blocks": 26476544, 00:15:09.664 "uuid": "ef8afe25-2150-4d93-bd54-2fe18b2aeefe", 00:15:09.664 "assigned_rate_limits": { 00:15:09.664 "rw_ios_per_sec": 0, 00:15:09.664 "rw_mbytes_per_sec": 0, 00:15:09.664 "r_mbytes_per_sec": 0, 00:15:09.664 "w_mbytes_per_sec": 0 00:15:09.664 }, 00:15:09.664 "claimed": false, 00:15:09.664 "zoned": false, 00:15:09.664 "supported_io_types": { 00:15:09.664 "read": true, 00:15:09.664 "write": true, 00:15:09.664 "unmap": true, 00:15:09.664 "flush": false, 00:15:09.664 "reset": true, 00:15:09.664 "nvme_admin": false, 00:15:09.664 "nvme_io": false, 00:15:09.664 "nvme_io_md": false, 00:15:09.664 "write_zeroes": true, 00:15:09.664 "zcopy": false, 00:15:09.664 "get_zone_info": false, 00:15:09.664 "zone_management": false, 00:15:09.664 "zone_append": false, 00:15:09.664 "compare": false, 00:15:09.664 "compare_and_write": false, 00:15:09.664 "abort": false, 00:15:09.664 "seek_hole": true, 00:15:09.664 "seek_data": true, 00:15:09.664 "copy": false, 00:15:09.664 "nvme_iov_md": false 00:15:09.664 }, 00:15:09.664 "driver_specific": { 00:15:09.664 "lvol": { 00:15:09.664 "lvol_store_uuid": "3322746e-3d62-417d-aa63-6b7d0be7a114", 00:15:09.664 "base_bdev": "nvme0n1", 00:15:09.664 "thin_provision": true, 00:15:09.664 "num_allocated_clusters": 0, 00:15:09.664 "snapshot": false, 00:15:09.664 "clone": false, 00:15:09.664 "esnap_clone": false 00:15:09.664 } 00:15:09.664 } 00:15:09.664 } 00:15:09.664 ]' 00:15:09.664 10:47:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:15:09.664 10:47:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:15:09.664 10:47:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:15:09.664 10:47:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=26476544 00:15:09.664 10:47:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:15:09.664 10:47:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 103424 00:15:09.664 10:47:09 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:15:09.664 10:47:09 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:15:09.664 10:47:09 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d ef8afe25-2150-4d93-bd54-2fe18b2aeefe -c nvc0n1p0 --l2p_dram_limit 60 00:15:09.924 [2024-12-16 10:47:09.757602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:09.924 [2024-12-16 10:47:09.757640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:15:09.924 [2024-12-16 10:47:09.757657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:15:09.924 
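Stepping back, the device stack under test is assembled entirely over rpc.py, and every command below appears verbatim in the trace above; the UUIDs are specific to this run, and the 5171 MiB cache size was itself derived from nvc0n1 with the same size probe:

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $RPC bdev_lvol_create_lvstore nvme0n1 lvs                      # -> lvs UUID
  $RPC bdev_lvol_create nvme0n1p0 103424 -t \
       -u 3322746e-3d62-417d-aa63-6b7d0be7a114                   # thin base lvol
  $RPC bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
  $RPC bdev_split_create nvc0n1 -s 5171 1                        # -> nvc0n1p0
  $RPC -t 240 bdev_ftl_create -b ftl0 \
       -d ef8afe25-2150-4d93-bd54-2fe18b2aeefe -c nvc0n1p0 --l2p_dram_limit 60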
[2024-12-16 10:47:09.757664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:09.924 [2024-12-16 10:47:09.757716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:09.924 [2024-12-16 10:47:09.757725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:15:09.924 [2024-12-16 10:47:09.757733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:15:09.924 [2024-12-16 10:47:09.757750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:09.924 [2024-12-16 10:47:09.757770] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:15:09.924 [2024-12-16 10:47:09.758036] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:15:09.924 [2024-12-16 10:47:09.758051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:09.924 [2024-12-16 10:47:09.758059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:15:09.924 [2024-12-16 10:47:09.758065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.284 ms 00:15:09.924 [2024-12-16 10:47:09.758073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:09.924 [2024-12-16 10:47:09.758102] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 27ae515e-fdd7-4a85-b17a-e4ed3e6dc21f 00:15:09.924 [2024-12-16 10:47:09.759068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:09.924 [2024-12-16 10:47:09.759091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:15:09.924 [2024-12-16 10:47:09.759102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:15:09.924 [2024-12-16 10:47:09.759108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:09.924 [2024-12-16 10:47:09.763748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:09.924 [2024-12-16 10:47:09.763778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:15:09.924 [2024-12-16 10:47:09.763787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.573 ms 00:15:09.924 [2024-12-16 10:47:09.763793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:09.924 [2024-12-16 10:47:09.763884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:09.924 [2024-12-16 10:47:09.763893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:15:09.924 [2024-12-16 10:47:09.763901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:15:09.924 [2024-12-16 10:47:09.763906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:09.924 [2024-12-16 10:47:09.763958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:09.924 [2024-12-16 10:47:09.763966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:15:09.924 [2024-12-16 10:47:09.763982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:15:09.924 [2024-12-16 10:47:09.763988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:09.924 [2024-12-16 10:47:09.764009] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:15:09.924 [2024-12-16 10:47:09.765251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:09.924 [2024-12-16 
10:47:09.765277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:15:09.924 [2024-12-16 10:47:09.765284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.248 ms 00:15:09.924 [2024-12-16 10:47:09.765292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:09.924 [2024-12-16 10:47:09.765329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:09.924 [2024-12-16 10:47:09.765336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:15:09.924 [2024-12-16 10:47:09.765342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:15:09.924 [2024-12-16 10:47:09.765350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:09.924 [2024-12-16 10:47:09.765374] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:15:09.924 [2024-12-16 10:47:09.765482] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:15:09.924 [2024-12-16 10:47:09.765499] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:15:09.924 [2024-12-16 10:47:09.765508] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:15:09.924 [2024-12-16 10:47:09.765516] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:15:09.924 [2024-12-16 10:47:09.765532] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:15:09.924 [2024-12-16 10:47:09.765538] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:15:09.924 [2024-12-16 10:47:09.765548] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:15:09.924 [2024-12-16 10:47:09.765561] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:15:09.924 [2024-12-16 10:47:09.765568] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:15:09.924 [2024-12-16 10:47:09.765573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:09.924 [2024-12-16 10:47:09.765580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:15:09.924 [2024-12-16 10:47:09.765587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.200 ms 00:15:09.924 [2024-12-16 10:47:09.765600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:09.924 [2024-12-16 10:47:09.765669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:09.924 [2024-12-16 10:47:09.765678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:15:09.924 [2024-12-16 10:47:09.765684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:15:09.924 [2024-12-16 10:47:09.765691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:09.924 [2024-12-16 10:47:09.765775] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:15:09.924 [2024-12-16 10:47:09.765789] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:15:09.924 [2024-12-16 10:47:09.765796] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:15:09.924 [2024-12-16 10:47:09.765811] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:09.924 [2024-12-16 10:47:09.765817] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region l2p 00:15:09.924 [2024-12-16 10:47:09.765823] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:15:09.924 [2024-12-16 10:47:09.765828] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:15:09.924 [2024-12-16 10:47:09.765835] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:15:09.924 [2024-12-16 10:47:09.765841] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:15:09.924 [2024-12-16 10:47:09.765848] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:15:09.924 [2024-12-16 10:47:09.765856] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:15:09.924 [2024-12-16 10:47:09.765862] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:15:09.924 [2024-12-16 10:47:09.765867] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:15:09.924 [2024-12-16 10:47:09.765876] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:15:09.924 [2024-12-16 10:47:09.765881] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:15:09.924 [2024-12-16 10:47:09.765887] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:09.924 [2024-12-16 10:47:09.765892] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:15:09.924 [2024-12-16 10:47:09.765899] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:15:09.924 [2024-12-16 10:47:09.765905] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:09.924 [2024-12-16 10:47:09.765912] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:15:09.924 [2024-12-16 10:47:09.765918] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:15:09.924 [2024-12-16 10:47:09.765926] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:09.924 [2024-12-16 10:47:09.765940] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:15:09.924 [2024-12-16 10:47:09.765948] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:15:09.924 [2024-12-16 10:47:09.765953] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:09.924 [2024-12-16 10:47:09.765960] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:15:09.924 [2024-12-16 10:47:09.765966] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:15:09.924 [2024-12-16 10:47:09.765976] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:09.924 [2024-12-16 10:47:09.765981] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:15:09.924 [2024-12-16 10:47:09.765990] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:15:09.924 [2024-12-16 10:47:09.765995] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:09.924 [2024-12-16 10:47:09.766003] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:15:09.924 [2024-12-16 10:47:09.766009] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:15:09.924 [2024-12-16 10:47:09.766017] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:15:09.924 [2024-12-16 10:47:09.766023] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:15:09.924 [2024-12-16 10:47:09.766030] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:15:09.924 [2024-12-16 10:47:09.766035] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:15:09.924 [2024-12-16 10:47:09.766044] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:15:09.924 [2024-12-16 10:47:09.766049] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:15:09.925 [2024-12-16 10:47:09.766056] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:09.925 [2024-12-16 10:47:09.766062] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:15:09.925 [2024-12-16 10:47:09.766069] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:15:09.925 [2024-12-16 10:47:09.766077] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:09.925 [2024-12-16 10:47:09.766084] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:15:09.925 [2024-12-16 10:47:09.766090] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:15:09.925 [2024-12-16 10:47:09.766099] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:15:09.925 [2024-12-16 10:47:09.766106] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:09.925 [2024-12-16 10:47:09.766120] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:15:09.925 [2024-12-16 10:47:09.766127] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:15:09.925 [2024-12-16 10:47:09.766134] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:15:09.925 [2024-12-16 10:47:09.766140] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:15:09.925 [2024-12-16 10:47:09.766147] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:15:09.925 [2024-12-16 10:47:09.766153] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:15:09.925 [2024-12-16 10:47:09.766162] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:15:09.925 [2024-12-16 10:47:09.766179] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:15:09.925 [2024-12-16 10:47:09.766188] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:15:09.925 [2024-12-16 10:47:09.766194] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:15:09.925 [2024-12-16 10:47:09.766201] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:15:09.925 [2024-12-16 10:47:09.766207] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:15:09.925 [2024-12-16 10:47:09.766215] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:15:09.925 [2024-12-16 10:47:09.766221] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:15:09.925 [2024-12-16 10:47:09.766230] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:15:09.925 [2024-12-16 10:47:09.766236] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 
blk_offs:0x7120 blk_sz:0x40 00:15:09.925 [2024-12-16 10:47:09.766243] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:15:09.925 [2024-12-16 10:47:09.766249] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:15:09.925 [2024-12-16 10:47:09.766257] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:15:09.925 [2024-12-16 10:47:09.766262] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:15:09.925 [2024-12-16 10:47:09.766269] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:15:09.925 [2024-12-16 10:47:09.766276] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:15:09.925 [2024-12-16 10:47:09.766283] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:15:09.925 [2024-12-16 10:47:09.766297] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:15:09.925 [2024-12-16 10:47:09.766304] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:15:09.925 [2024-12-16 10:47:09.766309] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:15:09.925 [2024-12-16 10:47:09.766315] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:15:09.925 [2024-12-16 10:47:09.766322] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:15:09.925 [2024-12-16 10:47:09.766329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:09.925 [2024-12-16 10:47:09.766335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:15:09.925 [2024-12-16 10:47:09.766344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.604 ms 00:15:09.925 [2024-12-16 10:47:09.766356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:09.925 [2024-12-16 10:47:09.766405] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
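The layout dump above is internally consistent and easy to sanity-check: 20971520 L2P entries at the reported 4-byte address size is exactly the 80 MiB l2p region, and the same entry count at a 4096-byte block size is the 80 GiB (20971520-block) user capacity that ftl0 reports once created:

  # Back-of-envelope check of the layout numbers printed above.
  echo $(( 20971520 * 4 / 1024 / 1024 ))           # 80 -> "Region l2p ... 80.00 MiB"
  echo $(( 20971520 * 4096 / 1024 / 1024 / 1024 )) # 80 -> 80 GiB usable on ftl0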
00:15:09.925 [2024-12-16 10:47:09.766422] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:15:11.869 [2024-12-16 10:47:11.647206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:11.869 [2024-12-16 10:47:11.647260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:15:11.869 [2024-12-16 10:47:11.647274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1880.787 ms 00:15:11.869 [2024-12-16 10:47:11.647281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:11.869 [2024-12-16 10:47:11.663039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:11.869 [2024-12-16 10:47:11.663077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:15:11.869 [2024-12-16 10:47:11.663089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.683 ms 00:15:11.869 [2024-12-16 10:47:11.663095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:11.869 [2024-12-16 10:47:11.663181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:11.869 [2024-12-16 10:47:11.663189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:15:11.869 [2024-12-16 10:47:11.663197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:15:11.869 [2024-12-16 10:47:11.663203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:11.869 [2024-12-16 10:47:11.672885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:11.869 [2024-12-16 10:47:11.672943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:15:11.869 [2024-12-16 10:47:11.672960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.632 ms 00:15:11.869 [2024-12-16 10:47:11.672971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:11.869 [2024-12-16 10:47:11.673037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:11.869 [2024-12-16 10:47:11.673049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:15:11.869 [2024-12-16 10:47:11.673063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:15:11.869 [2024-12-16 10:47:11.673073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:11.869 [2024-12-16 10:47:11.673434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:11.869 [2024-12-16 10:47:11.673465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:15:11.869 [2024-12-16 10:47:11.673480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.304 ms 00:15:11.869 [2024-12-16 10:47:11.673491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:11.869 [2024-12-16 10:47:11.673672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:11.869 [2024-12-16 10:47:11.673699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:15:11.869 [2024-12-16 10:47:11.673717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.140 ms 00:15:11.869 [2024-12-16 10:47:11.673729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:11.869 [2024-12-16 10:47:11.679488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:11.869 [2024-12-16 10:47:11.679515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:15:11.869 [2024-12-16 
10:47:11.679524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.703 ms 00:15:11.869 [2024-12-16 10:47:11.679529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:11.869 [2024-12-16 10:47:11.686007] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:15:11.869 [2024-12-16 10:47:11.698054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:11.869 [2024-12-16 10:47:11.698084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:15:11.869 [2024-12-16 10:47:11.698093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.461 ms 00:15:11.869 [2024-12-16 10:47:11.698100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:11.869 [2024-12-16 10:47:11.729731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:11.869 [2024-12-16 10:47:11.729769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:15:11.869 [2024-12-16 10:47:11.729780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.601 ms 00:15:11.869 [2024-12-16 10:47:11.729789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:11.869 [2024-12-16 10:47:11.729959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:11.869 [2024-12-16 10:47:11.729970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:15:11.869 [2024-12-16 10:47:11.729976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.122 ms 00:15:11.869 [2024-12-16 10:47:11.729985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:11.869 [2024-12-16 10:47:11.732404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:11.869 [2024-12-16 10:47:11.732437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:15:11.869 [2024-12-16 10:47:11.732445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.399 ms 00:15:11.869 [2024-12-16 10:47:11.732454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:11.869 [2024-12-16 10:47:11.734438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:11.869 [2024-12-16 10:47:11.734467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:15:11.869 [2024-12-16 10:47:11.734475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.948 ms 00:15:11.869 [2024-12-16 10:47:11.734482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:11.869 [2024-12-16 10:47:11.734732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:11.869 [2024-12-16 10:47:11.734748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:15:11.869 [2024-12-16 10:47:11.734755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.221 ms 00:15:11.869 [2024-12-16 10:47:11.734763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:11.869 [2024-12-16 10:47:11.753376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:11.869 [2024-12-16 10:47:11.753415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:15:11.869 [2024-12-16 10:47:11.753422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.589 ms 00:15:11.869 [2024-12-16 10:47:11.753431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:11.869 [2024-12-16 10:47:11.756524] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:11.869 [2024-12-16 10:47:11.756554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:15:11.869 [2024-12-16 10:47:11.756563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.038 ms 00:15:11.869 [2024-12-16 10:47:11.756571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:11.869 [2024-12-16 10:47:11.758989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:11.869 [2024-12-16 10:47:11.759016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:15:11.869 [2024-12-16 10:47:11.759023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.386 ms 00:15:11.869 [2024-12-16 10:47:11.759029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:11.869 [2024-12-16 10:47:11.761428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:11.869 [2024-12-16 10:47:11.761457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:15:11.869 [2024-12-16 10:47:11.761465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.369 ms 00:15:11.869 [2024-12-16 10:47:11.761474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:11.869 [2024-12-16 10:47:11.761507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:11.869 [2024-12-16 10:47:11.761516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:15:11.869 [2024-12-16 10:47:11.761523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:15:11.869 [2024-12-16 10:47:11.761531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:11.869 [2024-12-16 10:47:11.761590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:11.869 [2024-12-16 10:47:11.761599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:15:11.869 [2024-12-16 10:47:11.761606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:15:11.869 [2024-12-16 10:47:11.761614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:11.869 [2024-12-16 10:47:11.762392] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2004.458 ms, result 0 00:15:11.869 { 00:15:11.869 "name": "ftl0", 00:15:11.869 "uuid": "27ae515e-fdd7-4a85-b17a-e4ed3e6dc21f" 00:15:11.869 } 00:15:11.869 10:47:11 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:15:11.869 10:47:11 ftl.ftl_fio_basic -- common/autotest_common.sh@899 -- # local bdev_name=ftl0 00:15:11.869 10:47:11 ftl.ftl_fio_basic -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:11.869 10:47:11 ftl.ftl_fio_basic -- common/autotest_common.sh@901 -- # local i 00:15:11.869 10:47:11 ftl.ftl_fio_basic -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:11.869 10:47:11 ftl.ftl_fio_basic -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:11.869 10:47:11 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:12.128 10:47:11 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:15:12.386 [ 00:15:12.386 { 00:15:12.386 "name": "ftl0", 00:15:12.386 "aliases": [ 00:15:12.386 "27ae515e-fdd7-4a85-b17a-e4ed3e6dc21f" 00:15:12.386 ], 00:15:12.386 "product_name": "FTL disk", 00:15:12.386 
"block_size": 4096, 00:15:12.386 "num_blocks": 20971520, 00:15:12.386 "uuid": "27ae515e-fdd7-4a85-b17a-e4ed3e6dc21f", 00:15:12.386 "assigned_rate_limits": { 00:15:12.386 "rw_ios_per_sec": 0, 00:15:12.386 "rw_mbytes_per_sec": 0, 00:15:12.386 "r_mbytes_per_sec": 0, 00:15:12.386 "w_mbytes_per_sec": 0 00:15:12.386 }, 00:15:12.386 "claimed": false, 00:15:12.386 "zoned": false, 00:15:12.386 "supported_io_types": { 00:15:12.386 "read": true, 00:15:12.386 "write": true, 00:15:12.386 "unmap": true, 00:15:12.386 "flush": true, 00:15:12.386 "reset": false, 00:15:12.386 "nvme_admin": false, 00:15:12.386 "nvme_io": false, 00:15:12.386 "nvme_io_md": false, 00:15:12.386 "write_zeroes": true, 00:15:12.386 "zcopy": false, 00:15:12.386 "get_zone_info": false, 00:15:12.386 "zone_management": false, 00:15:12.386 "zone_append": false, 00:15:12.386 "compare": false, 00:15:12.386 "compare_and_write": false, 00:15:12.386 "abort": false, 00:15:12.386 "seek_hole": false, 00:15:12.386 "seek_data": false, 00:15:12.386 "copy": false, 00:15:12.386 "nvme_iov_md": false 00:15:12.386 }, 00:15:12.386 "driver_specific": { 00:15:12.386 "ftl": { 00:15:12.386 "base_bdev": "ef8afe25-2150-4d93-bd54-2fe18b2aeefe", 00:15:12.386 "cache": "nvc0n1p0" 00:15:12.386 } 00:15:12.386 } 00:15:12.386 } 00:15:12.386 ] 00:15:12.386 10:47:12 ftl.ftl_fio_basic -- common/autotest_common.sh@907 -- # return 0 00:15:12.386 10:47:12 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:15:12.386 10:47:12 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:15:12.386 10:47:12 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:15:12.386 10:47:12 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:15:12.647 [2024-12-16 10:47:12.483159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:12.647 [2024-12-16 10:47:12.483195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:15:12.647 [2024-12-16 10:47:12.483207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:15:12.647 [2024-12-16 10:47:12.483213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:12.647 [2024-12-16 10:47:12.483240] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:15:12.647 [2024-12-16 10:47:12.483637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:12.647 [2024-12-16 10:47:12.483677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:15:12.647 [2024-12-16 10:47:12.483685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.386 ms 00:15:12.647 [2024-12-16 10:47:12.483692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:12.647 [2024-12-16 10:47:12.484035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:12.647 [2024-12-16 10:47:12.484066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:15:12.647 [2024-12-16 10:47:12.484072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.320 ms 00:15:12.647 [2024-12-16 10:47:12.484080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:12.647 [2024-12-16 10:47:12.486485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:12.647 [2024-12-16 10:47:12.486504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:15:12.647 [2024-12-16 
10:47:12.486512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.387 ms 00:15:12.647 [2024-12-16 10:47:12.486527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:12.647 [2024-12-16 10:47:12.491186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:12.647 [2024-12-16 10:47:12.491213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:15:12.647 [2024-12-16 10:47:12.491221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.638 ms 00:15:12.647 [2024-12-16 10:47:12.491230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:12.647 [2024-12-16 10:47:12.492592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:12.647 [2024-12-16 10:47:12.492627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:15:12.647 [2024-12-16 10:47:12.492634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.300 ms 00:15:12.647 [2024-12-16 10:47:12.492641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:12.647 [2024-12-16 10:47:12.495660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:12.647 [2024-12-16 10:47:12.495691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:15:12.647 [2024-12-16 10:47:12.495699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.988 ms 00:15:12.647 [2024-12-16 10:47:12.495707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:12.647 [2024-12-16 10:47:12.495831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:12.647 [2024-12-16 10:47:12.495849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:15:12.647 [2024-12-16 10:47:12.495856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:15:12.647 [2024-12-16 10:47:12.495872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:12.647 [2024-12-16 10:47:12.497097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:12.647 [2024-12-16 10:47:12.497125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:15:12.647 [2024-12-16 10:47:12.497132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.204 ms 00:15:12.647 [2024-12-16 10:47:12.497139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:12.647 [2024-12-16 10:47:12.498017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:12.647 [2024-12-16 10:47:12.498048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:15:12.647 [2024-12-16 10:47:12.498055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.848 ms 00:15:12.647 [2024-12-16 10:47:12.498062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:12.647 [2024-12-16 10:47:12.498820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:12.647 [2024-12-16 10:47:12.498850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:15:12.647 [2024-12-16 10:47:12.498857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.730 ms 00:15:12.647 [2024-12-16 10:47:12.498864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:12.647 [2024-12-16 10:47:12.499627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:12.647 [2024-12-16 10:47:12.499656] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:15:12.647 [2024-12-16 10:47:12.499663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.708 ms 00:15:12.647 [2024-12-16 10:47:12.499670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:12.647 [2024-12-16 10:47:12.499700] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:15:12.647 [2024-12-16 10:47:12.499712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:15:12.647 [2024-12-16 10:47:12.499720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:15:12.647 [2024-12-16 10:47:12.499727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:15:12.647 [2024-12-16 10:47:12.499733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:15:12.647 [2024-12-16 10:47:12.499742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:15:12.647 [2024-12-16 10:47:12.499748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:15:12.647 [2024-12-16 10:47:12.499754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:15:12.647 [2024-12-16 10:47:12.499760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:15:12.647 [2024-12-16 10:47:12.499767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:15:12.647 [2024-12-16 10:47:12.499772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:15:12.647 [2024-12-16 10:47:12.499779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:15:12.647 [2024-12-16 10:47:12.499785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:15:12.647 [2024-12-16 10:47:12.499792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:15:12.647 [2024-12-16 10:47:12.499798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:15:12.647 [2024-12-16 10:47:12.499804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:15:12.647 [2024-12-16 10:47:12.499810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:15:12.647 [2024-12-16 10:47:12.499818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:15:12.647 [2024-12-16 10:47:12.499824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:15:12.647 [2024-12-16 10:47:12.499830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:15:12.647 [2024-12-16 10:47:12.499836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:15:12.647 [2024-12-16 10:47:12.499845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:15:12.647 [2024-12-16 10:47:12.499850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:15:12.647 [2024-12-16 
10:47:12.499857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:15:12.647 [2024-12-16 10:47:12.499873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:15:12.647 [2024-12-16 10:47:12.499880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:15:12.647 [2024-12-16 10:47:12.499885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:15:12.647 [2024-12-16 10:47:12.499892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:15:12.647 [2024-12-16 10:47:12.499898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:15:12.647 [2024-12-16 10:47:12.499905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:15:12.647 [2024-12-16 10:47:12.499910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:15:12.647 [2024-12-16 10:47:12.499917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:15:12.647 [2024-12-16 10:47:12.499923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:15:12.647 [2024-12-16 10:47:12.499939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:15:12.647 [2024-12-16 10:47:12.499945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:15:12.647 [2024-12-16 10:47:12.499952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:15:12.647 [2024-12-16 10:47:12.499957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:15:12.647 [2024-12-16 10:47:12.499966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:15:12.647 [2024-12-16 10:47:12.499971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:15:12.647 [2024-12-16 10:47:12.499978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:15:12.647 [2024-12-16 10:47:12.499984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:15:12.647 [2024-12-16 10:47:12.499992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:15:12.647 [2024-12-16 10:47:12.500001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:15:12.647 [2024-12-16 10:47:12.500009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:15:12.647 [2024-12-16 10:47:12.500015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:15:12.647 [2024-12-16 10:47:12.500022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:15:12.648 [2024-12-16 10:47:12.500027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:15:12.648 [2024-12-16 10:47:12.500034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 
00:15:12.648 [2024-12-16 10:47:12.500040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:15:12.648 [2024-12-16 10:47:12.500046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:15:12.648 [2024-12-16 10:47:12.500052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:15:12.648 [2024-12-16 10:47:12.500059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:15:12.648 [2024-12-16 10:47:12.500065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:15:12.648 [2024-12-16 10:47:12.500073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:15:12.648 [2024-12-16 10:47:12.500079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:15:12.648 [2024-12-16 10:47:12.500085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:15:12.648 [2024-12-16 10:47:12.500091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:15:12.648 [2024-12-16 10:47:12.500098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:15:12.648 [2024-12-16 10:47:12.500104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:15:12.648 [2024-12-16 10:47:12.500110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:15:12.648 [2024-12-16 10:47:12.500116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:15:12.648 [2024-12-16 10:47:12.500122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:15:12.648 [2024-12-16 10:47:12.500128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:15:12.648 [2024-12-16 10:47:12.500135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:15:12.648 [2024-12-16 10:47:12.500140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:15:12.648 [2024-12-16 10:47:12.500146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:15:12.648 [2024-12-16 10:47:12.500152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:15:12.648 [2024-12-16 10:47:12.500159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:15:12.648 [2024-12-16 10:47:12.500165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:15:12.648 [2024-12-16 10:47:12.500174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:15:12.648 [2024-12-16 10:47:12.500180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:15:12.648 [2024-12-16 10:47:12.500187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:15:12.648 [2024-12-16 10:47:12.500193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 
wr_cnt: 0 state: free 00:15:12.648 [2024-12-16 10:47:12.500199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:15:12.648 [2024-12-16 10:47:12.500206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:15:12.648 [2024-12-16 10:47:12.500214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:15:12.648 [2024-12-16 10:47:12.500219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:15:12.648 [2024-12-16 10:47:12.500226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:15:12.648 [2024-12-16 10:47:12.500232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:15:12.648 [2024-12-16 10:47:12.500239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:15:12.648 [2024-12-16 10:47:12.500244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:15:12.648 [2024-12-16 10:47:12.500251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:15:12.648 [2024-12-16 10:47:12.500256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:15:12.648 [2024-12-16 10:47:12.500264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:15:12.648 [2024-12-16 10:47:12.500269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:15:12.648 [2024-12-16 10:47:12.500277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:15:12.648 [2024-12-16 10:47:12.500283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:15:12.648 [2024-12-16 10:47:12.500289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:15:12.648 [2024-12-16 10:47:12.500294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:15:12.648 [2024-12-16 10:47:12.500301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:15:12.648 [2024-12-16 10:47:12.500307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:15:12.648 [2024-12-16 10:47:12.500313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:15:12.648 [2024-12-16 10:47:12.500319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:15:12.648 [2024-12-16 10:47:12.500326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:15:12.648 [2024-12-16 10:47:12.500331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:15:12.648 [2024-12-16 10:47:12.500339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:15:12.648 [2024-12-16 10:47:12.500345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:15:12.648 [2024-12-16 10:47:12.500351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 97: 0 / 261120 wr_cnt: 0 state: free 00:15:12.648 [2024-12-16 10:47:12.500357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:15:12.648 [2024-12-16 10:47:12.500364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:15:12.648 [2024-12-16 10:47:12.500369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:15:12.648 [2024-12-16 10:47:12.500384] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:15:12.648 [2024-12-16 10:47:12.500390] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 27ae515e-fdd7-4a85-b17a-e4ed3e6dc21f 00:15:12.648 [2024-12-16 10:47:12.500397] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:15:12.648 [2024-12-16 10:47:12.500402] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:15:12.648 [2024-12-16 10:47:12.500410] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:15:12.648 [2024-12-16 10:47:12.500417] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:15:12.648 [2024-12-16 10:47:12.500424] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:15:12.648 [2024-12-16 10:47:12.500430] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:15:12.648 [2024-12-16 10:47:12.500437] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:15:12.648 [2024-12-16 10:47:12.500442] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:15:12.648 [2024-12-16 10:47:12.500448] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:15:12.648 [2024-12-16 10:47:12.500454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:12.648 [2024-12-16 10:47:12.500460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:15:12.648 [2024-12-16 10:47:12.500467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.755 ms 00:15:12.648 [2024-12-16 10:47:12.500473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:12.648 [2024-12-16 10:47:12.501788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:12.648 [2024-12-16 10:47:12.501812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:15:12.648 [2024-12-16 10:47:12.501819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.299 ms 00:15:12.648 [2024-12-16 10:47:12.501834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:12.648 [2024-12-16 10:47:12.501916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:12.648 [2024-12-16 10:47:12.501925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:15:12.648 [2024-12-16 10:47:12.501942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:15:12.648 [2024-12-16 10:47:12.501949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:12.648 [2024-12-16 10:47:12.506553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:12.648 [2024-12-16 10:47:12.506585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:15:12.648 [2024-12-16 10:47:12.506593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:12.648 [2024-12-16 10:47:12.506601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:12.648 
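For reference, the fio.sh steps that produced this shutdown sequence are the subsystem-config capture followed by the unload; all three calls are visible in the trace above, and only the output path below is a made-up placeholder:

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  {
    echo '{"subsystems": ['
    $RPC save_subsystem_config -n bdev
    echo ']}'
  } > /tmp/ftl_bdev.json            # hypothetical path; fio.sh feeds this to fio
  $RPC bdev_ftl_unload -b ftl0      # clean shutdown: persists L2P and superblock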
[2024-12-16 10:47:12.506643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:12.648 [2024-12-16 10:47:12.506651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:15:12.648 [2024-12-16 10:47:12.506658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:12.648 [2024-12-16 10:47:12.506665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:12.648 [2024-12-16 10:47:12.506718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:12.648 [2024-12-16 10:47:12.506731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:15:12.648 [2024-12-16 10:47:12.506737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:12.648 [2024-12-16 10:47:12.506752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:12.648 [2024-12-16 10:47:12.506770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:12.648 [2024-12-16 10:47:12.506778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:15:12.648 [2024-12-16 10:47:12.506784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:12.648 [2024-12-16 10:47:12.506791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:12.648 [2024-12-16 10:47:12.515120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:12.648 [2024-12-16 10:47:12.515154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:15:12.648 [2024-12-16 10:47:12.515161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:12.648 [2024-12-16 10:47:12.515169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:12.648 [2024-12-16 10:47:12.521926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:12.648 [2024-12-16 10:47:12.522014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:15:12.648 [2024-12-16 10:47:12.522022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:12.649 [2024-12-16 10:47:12.522030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:12.649 [2024-12-16 10:47:12.522087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:12.649 [2024-12-16 10:47:12.522099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:15:12.649 [2024-12-16 10:47:12.522106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:12.649 [2024-12-16 10:47:12.522113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:12.649 [2024-12-16 10:47:12.522168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:12.649 [2024-12-16 10:47:12.522176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:15:12.649 [2024-12-16 10:47:12.522182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:12.649 [2024-12-16 10:47:12.522189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:12.649 [2024-12-16 10:47:12.522250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:12.649 [2024-12-16 10:47:12.522263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:15:12.649 [2024-12-16 10:47:12.522270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:12.649 [2024-12-16 10:47:12.522286] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:12.649 [2024-12-16 10:47:12.522330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:12.649 [2024-12-16 10:47:12.522339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:15:12.649 [2024-12-16 10:47:12.522345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:12.649 [2024-12-16 10:47:12.522352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:12.649 [2024-12-16 10:47:12.522386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:12.649 [2024-12-16 10:47:12.522396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:15:12.649 [2024-12-16 10:47:12.522401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:12.649 [2024-12-16 10:47:12.522409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:12.649 [2024-12-16 10:47:12.522447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:12.649 [2024-12-16 10:47:12.522459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:15:12.649 [2024-12-16 10:47:12.522465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:12.649 [2024-12-16 10:47:12.522472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:12.649 [2024-12-16 10:47:12.522601] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 39.404 ms, result 0 00:15:12.649 true 00:15:12.649 10:47:12 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 83906 00:15:12.649 10:47:12 ftl.ftl_fio_basic -- common/autotest_common.sh@950 -- # '[' -z 83906 ']' 00:15:12.649 10:47:12 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # kill -0 83906 00:15:12.649 10:47:12 ftl.ftl_fio_basic -- common/autotest_common.sh@955 -- # uname 00:15:12.649 10:47:12 ftl.ftl_fio_basic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:12.649 10:47:12 ftl.ftl_fio_basic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83906 00:15:12.649 killing process with pid 83906 00:15:12.649 10:47:12 ftl.ftl_fio_basic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:12.649 10:47:12 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:12.649 10:47:12 ftl.ftl_fio_basic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83906' 00:15:12.649 10:47:12 ftl.ftl_fio_basic -- common/autotest_common.sh@969 -- # kill 83906 00:15:12.649 10:47:12 ftl.ftl_fio_basic -- common/autotest_common.sh@974 -- # wait 83906 00:15:17.913 10:47:17 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:15:17.913 10:47:17 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:15:17.913 10:47:17 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:15:17.913 10:47:17 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:17.913 10:47:17 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:17.913 10:47:17 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:15:17.913 10:47:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:15:17.913 10:47:17 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:15:17.913 10:47:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:17.913 10:47:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local sanitizers 00:15:17.913 10:47:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:17.913 10:47:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # shift 00:15:17.913 10:47:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local asan_lib= 00:15:17.913 10:47:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:15:17.913 10:47:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # grep libasan 00:15:17.913 10:47:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:17.913 10:47:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:15:17.913 10:47:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:17.913 10:47:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:17.913 10:47:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # break 00:15:17.913 10:47:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:17.913 10:47:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:15:17.913 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:15:17.913 fio-3.35 00:15:17.913 Starting 1 thread 00:15:21.199 00:15:21.199 test: (groupid=0, jobs=1): err= 0: pid=84058: Mon Dec 16 10:47:20 2024 00:15:21.199 read: IOPS=1372, BW=91.2MiB/s (95.6MB/s)(255MiB/2792msec) 00:15:21.199 slat (nsec): min=2923, max=18414, avg=3622.85, stdev=1484.26 00:15:21.199 clat (usec): min=233, max=867, avg=331.73, stdev=47.94 00:15:21.199 lat (usec): min=236, max=876, avg=335.35, stdev=48.59 00:15:21.199 clat percentiles (usec): 00:15:21.199 | 1.00th=[ 281], 5.00th=[ 293], 10.00th=[ 293], 20.00th=[ 318], 00:15:21.199 | 30.00th=[ 322], 40.00th=[ 322], 50.00th=[ 322], 60.00th=[ 322], 00:15:21.199 | 70.00th=[ 326], 80.00th=[ 334], 90.00th=[ 359], 95.00th=[ 429], 00:15:21.199 | 99.00th=[ 545], 99.50th=[ 611], 99.90th=[ 799], 99.95th=[ 807], 00:15:21.199 | 99.99th=[ 865] 00:15:21.199 write: IOPS=1383, BW=91.8MiB/s (96.3MB/s)(256MiB/2788msec); 0 zone resets 00:15:21.199 slat (nsec): min=13614, max=94929, avg=16674.35, stdev=2806.63 00:15:21.199 clat (usec): min=265, max=1129, avg=360.79, stdev=65.17 00:15:21.199 lat (usec): min=281, max=1145, avg=377.47, stdev=65.78 00:15:21.199 clat percentiles (usec): 00:15:21.199 | 1.00th=[ 306], 5.00th=[ 314], 10.00th=[ 314], 20.00th=[ 338], 00:15:21.199 | 30.00th=[ 347], 40.00th=[ 347], 50.00th=[ 347], 60.00th=[ 351], 00:15:21.199 | 70.00th=[ 351], 80.00th=[ 359], 90.00th=[ 404], 95.00th=[ 449], 00:15:21.199 | 99.00th=[ 693], 99.50th=[ 725], 99.90th=[ 930], 99.95th=[ 971], 00:15:21.199 | 99.99th=[ 1123] 00:15:21.199 bw ( KiB/s): min=91752, max=98328, per=100.00%, avg=94456.00, stdev=2480.96, samples=5 00:15:21.199 iops : min= 1349, max= 1446, avg=1389.00, stdev=36.57, samples=5 00:15:21.199 lat (usec) : 250=0.04%, 500=97.20%, 750=2.52%, 1000=0.22% 
00:15:21.199 lat (msec) : 2=0.01% 00:15:21.199 cpu : usr=99.39%, sys=0.04%, ctx=7, majf=0, minf=1326 00:15:21.199 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:21.199 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:21.199 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:21.199 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:21.199 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:21.199 00:15:21.199 Run status group 0 (all jobs): 00:15:21.199 READ: bw=91.2MiB/s (95.6MB/s), 91.2MiB/s-91.2MiB/s (95.6MB/s-95.6MB/s), io=255MiB (267MB), run=2792-2792msec 00:15:21.199 WRITE: bw=91.8MiB/s (96.3MB/s), 91.8MiB/s-91.8MiB/s (96.3MB/s-96.3MB/s), io=256MiB (269MB), run=2788-2788msec 00:15:21.772 ----------------------------------------------------- 00:15:21.772 Suppressions used: 00:15:21.772 count bytes template 00:15:21.772 1 5 /usr/src/fio/parse.c 00:15:21.772 1 8 libtcmalloc_minimal.so 00:15:21.772 1 904 libcrypto.so 00:15:21.772 ----------------------------------------------------- 00:15:21.772 00:15:21.772 10:47:21 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:15:21.772 10:47:21 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:21.772 10:47:21 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:21.772 10:47:21 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:15:21.772 10:47:21 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:15:21.772 10:47:21 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:21.772 10:47:21 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:21.772 10:47:21 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:15:21.772 10:47:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:15:21.772 10:47:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:15:21.772 10:47:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:21.772 10:47:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local sanitizers 00:15:21.772 10:47:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:21.772 10:47:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # shift 00:15:21.772 10:47:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local asan_lib= 00:15:21.772 10:47:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:15:21.772 10:47:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:21.772 10:47:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # grep libasan 00:15:21.772 10:47:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:15:21.772 10:47:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:21.772 10:47:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:21.772 10:47:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # break 00:15:21.772 10:47:21 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:21.772 10:47:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:15:22.033 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:15:22.033 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:15:22.033 fio-3.35 00:15:22.033 Starting 2 threads 00:15:48.580 00:15:48.580 first_half: (groupid=0, jobs=1): err= 0: pid=84139: Mon Dec 16 10:47:44 2024 00:15:48.580 read: IOPS=2993, BW=11.7MiB/s (12.3MB/s)(255MiB/21796msec) 00:15:48.580 slat (usec): min=3, max=313, avg= 4.37, stdev= 2.27 00:15:48.580 clat (usec): min=579, max=358648, avg=33805.82, stdev=17314.39 00:15:48.580 lat (usec): min=585, max=358653, avg=33810.19, stdev=17314.43 00:15:48.580 clat percentiles (msec): 00:15:48.580 | 1.00th=[ 8], 5.00th=[ 27], 10.00th=[ 29], 20.00th=[ 30], 00:15:48.580 | 30.00th=[ 30], 40.00th=[ 30], 50.00th=[ 30], 60.00th=[ 31], 00:15:48.580 | 70.00th=[ 33], 80.00th=[ 35], 90.00th=[ 39], 95.00th=[ 46], 00:15:48.580 | 99.00th=[ 124], 99.50th=[ 146], 99.90th=[ 188], 99.95th=[ 296], 00:15:48.580 | 99.99th=[ 351] 00:15:48.580 write: IOPS=3618, BW=14.1MiB/s (14.8MB/s)(256MiB/18109msec); 0 zone resets 00:15:48.580 slat (usec): min=3, max=1113, avg= 6.20, stdev= 8.20 00:15:48.580 clat (usec): min=345, max=75860, avg=8876.88, stdev=14419.29 00:15:48.580 lat (usec): min=359, max=75866, avg=8883.08, stdev=14419.38 00:15:48.580 clat percentiles (usec): 00:15:48.580 | 1.00th=[ 644], 5.00th=[ 725], 10.00th=[ 799], 20.00th=[ 1074], 00:15:48.580 | 30.00th=[ 1991], 40.00th=[ 3163], 50.00th=[ 4293], 60.00th=[ 5080], 00:15:48.580 | 70.00th=[ 5735], 80.00th=[11076], 90.00th=[18482], 95.00th=[55837], 00:15:48.580 | 99.00th=[64750], 99.50th=[68682], 99.90th=[73925], 99.95th=[74974], 00:15:48.580 | 99.99th=[74974] 00:15:48.580 bw ( KiB/s): min= 920, max=40672, per=94.89%, avg=24966.10, stdev=12966.81, samples=21 00:15:48.580 iops : min= 230, max=10168, avg=6241.52, stdev=3241.72, samples=21 00:15:48.580 lat (usec) : 500=0.03%, 750=3.49%, 1000=5.20% 00:15:48.580 lat (msec) : 2=6.55%, 4=8.94%, 10=15.75%, 20=6.63%, 50=48.20% 00:15:48.580 lat (msec) : 100=4.34%, 250=0.82%, 500=0.03% 00:15:48.580 cpu : usr=98.68%, sys=0.43%, ctx=103, majf=0, minf=5569 00:15:48.580 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:15:48.580 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:48.580 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:48.580 issued rwts: total=65242,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:48.580 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:48.580 second_half: (groupid=0, jobs=1): err= 0: pid=84140: Mon Dec 16 10:47:44 2024 00:15:48.580 read: IOPS=2978, BW=11.6MiB/s (12.2MB/s)(255MiB/21927msec) 00:15:48.580 slat (nsec): min=3038, max=31054, avg=3791.51, stdev=712.23 00:15:48.580 clat (usec): min=589, max=365037, avg=33149.76, stdev=18939.76 00:15:48.580 lat (usec): min=593, max=365042, avg=33153.55, stdev=18939.83 00:15:48.580 clat percentiles (msec): 00:15:48.580 | 1.00th=[ 7], 5.00th=[ 26], 10.00th=[ 29], 20.00th=[ 30], 00:15:48.580 | 30.00th=[ 30], 40.00th=[ 30], 50.00th=[ 30], 60.00th=[ 31], 00:15:48.580 | 70.00th=[ 32], 80.00th=[ 34], 90.00th=[ 37], 95.00th=[ 
44], 00:15:48.580 | 99.00th=[ 132], 99.50th=[ 153], 99.90th=[ 224], 99.95th=[ 284], 00:15:48.580 | 99.99th=[ 359] 00:15:48.580 write: IOPS=3288, BW=12.8MiB/s (13.5MB/s)(256MiB/19928msec); 0 zone resets 00:15:48.580 slat (usec): min=3, max=1054, avg= 5.32, stdev= 7.81 00:15:48.580 clat (usec): min=348, max=76263, avg=9769.05, stdev=15283.93 00:15:48.580 lat (usec): min=356, max=76268, avg=9774.38, stdev=15284.03 00:15:48.580 clat percentiles (usec): 00:15:48.580 | 1.00th=[ 635], 5.00th=[ 742], 10.00th=[ 840], 20.00th=[ 1090], 00:15:48.580 | 30.00th=[ 1483], 40.00th=[ 3032], 50.00th=[ 3982], 60.00th=[ 4948], 00:15:48.580 | 70.00th=[ 6325], 80.00th=[13698], 90.00th=[26870], 95.00th=[56886], 00:15:48.580 | 99.00th=[64750], 99.50th=[69731], 99.90th=[74974], 99.95th=[74974], 00:15:48.580 | 99.99th=[76022] 00:15:48.580 bw ( KiB/s): min= 912, max=61680, per=86.64%, avg=22794.65, stdev=16861.36, samples=23 00:15:48.580 iops : min= 228, max=15420, avg=5698.65, stdev=4215.35, samples=23 00:15:48.580 lat (usec) : 500=0.03%, 750=2.63%, 1000=5.87% 00:15:48.580 lat (msec) : 2=7.81%, 4=8.98%, 10=14.24%, 20=6.32%, 50=48.85% 00:15:48.580 lat (msec) : 100=4.39%, 250=0.84%, 500=0.04% 00:15:48.580 cpu : usr=99.30%, sys=0.11%, ctx=50, majf=0, minf=5575 00:15:48.580 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:15:48.580 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:48.580 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:48.580 issued rwts: total=65320,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:48.580 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:48.580 00:15:48.580 Run status group 0 (all jobs): 00:15:48.580 READ: bw=23.3MiB/s (24.4MB/s), 11.6MiB/s-11.7MiB/s (12.2MB/s-12.3MB/s), io=510MiB (535MB), run=21796-21927msec 00:15:48.580 WRITE: bw=25.7MiB/s (26.9MB/s), 12.8MiB/s-14.1MiB/s (13.5MB/s-14.8MB/s), io=512MiB (537MB), run=18109-19928msec 00:15:48.580 ----------------------------------------------------- 00:15:48.580 Suppressions used: 00:15:48.580 count bytes template 00:15:48.580 2 10 /usr/src/fio/parse.c 00:15:48.580 2 192 /usr/src/fio/iolog.c 00:15:48.580 1 8 libtcmalloc_minimal.so 00:15:48.580 1 904 libcrypto.so 00:15:48.580 ----------------------------------------------------- 00:15:48.580 00:15:48.580 10:47:45 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:15:48.580 10:47:45 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:48.580 10:47:45 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:48.580 10:47:45 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:15:48.580 10:47:45 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:15:48.580 10:47:45 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:48.580 10:47:45 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:48.580 10:47:45 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:15:48.580 10:47:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:15:48.580 10:47:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:15:48.580 10:47:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:48.580 
10:47:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local sanitizers 00:15:48.580 10:47:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:48.580 10:47:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # shift 00:15:48.580 10:47:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local asan_lib= 00:15:48.580 10:47:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:15:48.580 10:47:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # grep libasan 00:15:48.580 10:47:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:48.580 10:47:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:15:48.580 10:47:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:48.580 10:47:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:48.580 10:47:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # break 00:15:48.580 10:47:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:48.580 10:47:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:15:48.580 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:15:48.580 fio-3.35 00:15:48.580 Starting 1 thread 00:16:00.806 00:16:00.806 test: (groupid=0, jobs=1): err= 0: pid=84419: Mon Dec 16 10:48:00 2024 00:16:00.806 read: IOPS=6984, BW=27.3MiB/s (28.6MB/s)(255MiB/9335msec) 00:16:00.806 slat (nsec): min=3029, max=40144, avg=5018.88, stdev=2005.31 00:16:00.806 clat (usec): min=754, max=34666, avg=18316.67, stdev=3434.55 00:16:00.806 lat (usec): min=764, max=34671, avg=18321.69, stdev=3435.34 00:16:00.806 clat percentiles (usec): 00:16:00.806 | 1.00th=[13042], 5.00th=[14746], 10.00th=[15008], 20.00th=[15401], 00:16:00.806 | 30.00th=[15533], 40.00th=[15926], 50.00th=[17433], 60.00th=[19006], 00:16:00.806 | 70.00th=[20055], 80.00th=[21365], 90.00th=[22938], 95.00th=[24511], 00:16:00.806 | 99.00th=[28443], 99.50th=[29492], 99.90th=[32113], 99.95th=[33162], 00:16:00.806 | 99.99th=[33817] 00:16:00.806 write: IOPS=12.6k, BW=49.2MiB/s (51.6MB/s)(256MiB/5198msec); 0 zone resets 00:16:00.806 slat (usec): min=3, max=949, avg= 6.73, stdev= 5.66 00:16:00.806 clat (usec): min=446, max=71740, avg=10102.56, stdev=13568.96 00:16:00.806 lat (usec): min=451, max=71745, avg=10109.28, stdev=13568.92 00:16:00.806 clat percentiles (usec): 00:16:00.806 | 1.00th=[ 586], 5.00th=[ 750], 10.00th=[ 930], 20.00th=[ 1352], 00:16:00.806 | 30.00th=[ 1795], 40.00th=[ 2769], 50.00th=[ 5080], 60.00th=[ 6521], 00:16:00.806 | 70.00th=[ 8455], 80.00th=[13304], 90.00th=[35390], 95.00th=[45351], 00:16:00.806 | 99.00th=[52691], 99.50th=[55313], 99.90th=[64750], 99.95th=[68682], 00:16:00.806 | 99.99th=[69731] 00:16:00.806 bw ( KiB/s): min=14056, max=74912, per=94.51%, avg=47662.55, stdev=17791.18, samples=11 00:16:00.806 iops : min= 3514, max=18728, avg=11915.64, stdev=4447.80, samples=11 00:16:00.806 lat (usec) : 500=0.01%, 750=2.48%, 1000=3.48% 00:16:00.806 lat (msec) : 2=10.86%, 4=4.63%, 10=16.14%, 20=38.99%, 50=22.35% 00:16:00.806 lat (msec) : 100=1.06% 00:16:00.806 cpu : usr=98.91%, sys=0.24%, ctx=24, 
majf=0, minf=5577 00:16:00.806 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:16:00.806 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:00.806 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:00.806 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:00.806 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:00.806 00:16:00.806 Run status group 0 (all jobs): 00:16:00.806 READ: bw=27.3MiB/s (28.6MB/s), 27.3MiB/s-27.3MiB/s (28.6MB/s-28.6MB/s), io=255MiB (267MB), run=9335-9335msec 00:16:00.806 WRITE: bw=49.2MiB/s (51.6MB/s), 49.2MiB/s-49.2MiB/s (51.6MB/s-51.6MB/s), io=256MiB (268MB), run=5198-5198msec 00:16:01.745 ----------------------------------------------------- 00:16:01.745 Suppressions used: 00:16:01.745 count bytes template 00:16:01.745 1 5 /usr/src/fio/parse.c 00:16:01.745 2 192 /usr/src/fio/iolog.c 00:16:01.745 1 8 libtcmalloc_minimal.so 00:16:01.745 1 904 libcrypto.so 00:16:01.745 ----------------------------------------------------- 00:16:01.745 00:16:01.745 10:48:01 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:16:01.745 10:48:01 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:01.745 10:48:01 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:02.005 10:48:01 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:02.005 10:48:01 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:16:02.005 10:48:01 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:16:02.005 Remove shared memory files 00:16:02.005 10:48:01 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:16:02.005 10:48:01 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:16:02.005 10:48:01 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid69457 /dev/shm/spdk_tgt_trace.pid82853 00:16:02.005 10:48:01 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:16:02.005 10:48:01 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:16:02.005 ************************************ 00:16:02.005 END TEST ftl_fio_basic 00:16:02.005 ************************************ 00:16:02.005 00:16:02.005 real 0m55.539s 00:16:02.005 user 2m0.997s 00:16:02.005 sys 0m2.445s 00:16:02.005 10:48:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:02.005 10:48:01 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:02.005 10:48:01 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:16:02.005 10:48:01 ftl -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:16:02.005 10:48:01 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:02.005 10:48:01 ftl -- common/autotest_common.sh@10 -- # set +x 00:16:02.005 ************************************ 00:16:02.005 START TEST ftl_bdevperf 00:16:02.005 ************************************ 00:16:02.005 10:48:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:16:02.005 * Looking for test storage... 
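Before ftl_bdevperf starts, the harness above tears down ftl_fio_basic state: it deletes the generated ftl.json and calls remove_shm to clear shared-memory leftovers. A minimal sketch of that cleanup pattern, with illustrative globs standing in for the exact per-PID paths the trace shows (spdk_tgt_trace.pid69457, spdk_tgt_trace.pid82853):

  remove_shm() {
    echo 'Remove shared memory files'
    rm -f /dev/shm/spdk_tgt_trace.pid*   # trace files left behind by earlier spdk_tgt runs
    rm -f /dev/shm/iscsi                 # stray iSCSI shm file, if one exists
  }

Clearing these between suites keeps stale files from a previous target from leaking into the next test's results.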
00:16:02.005 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:16:02.005 10:48:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:16:02.005 10:48:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1681 -- # lcov --version 00:16:02.005 10:48:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:16:02.005 10:48:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:16:02.005 10:48:01 ftl.ftl_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:02.005 10:48:01 ftl.ftl_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:02.005 10:48:01 ftl.ftl_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:02.005 10:48:01 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:16:02.005 10:48:01 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:16:02.005 10:48:01 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:16:02.005 10:48:01 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:16:02.005 10:48:01 ftl.ftl_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:16:02.005 10:48:01 ftl.ftl_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:16:02.005 10:48:01 ftl.ftl_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:16:02.005 10:48:01 ftl.ftl_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:02.005 10:48:01 ftl.ftl_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:16:02.005 10:48:01 ftl.ftl_bdevperf -- scripts/common.sh@345 -- # : 1 00:16:02.005 10:48:01 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:02.005 10:48:01 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:02.005 10:48:01 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:16:02.005 10:48:01 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=1 00:16:02.005 10:48:01 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:02.005 10:48:01 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 1 00:16:02.005 10:48:01 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:16:02.005 10:48:01 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:16:02.005 10:48:01 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=2 00:16:02.005 10:48:01 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:02.005 10:48:01 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 2 00:16:02.005 10:48:01 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:16:02.005 10:48:01 ftl.ftl_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:02.005 10:48:01 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:02.005 10:48:01 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # return 0 00:16:02.005 10:48:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:02.005 10:48:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:16:02.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:02.005 --rc genhtml_branch_coverage=1 00:16:02.005 --rc genhtml_function_coverage=1 00:16:02.005 --rc genhtml_legend=1 00:16:02.005 --rc geninfo_all_blocks=1 00:16:02.005 --rc geninfo_unexecuted_blocks=1 00:16:02.005 00:16:02.005 ' 00:16:02.005 10:48:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:16:02.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:02.005 --rc genhtml_branch_coverage=1 00:16:02.005 
--rc genhtml_function_coverage=1 00:16:02.005 --rc genhtml_legend=1 00:16:02.005 --rc geninfo_all_blocks=1 00:16:02.005 --rc geninfo_unexecuted_blocks=1 00:16:02.005 00:16:02.005 ' 00:16:02.005 10:48:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:16:02.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:02.005 --rc genhtml_branch_coverage=1 00:16:02.005 --rc genhtml_function_coverage=1 00:16:02.005 --rc genhtml_legend=1 00:16:02.005 --rc geninfo_all_blocks=1 00:16:02.005 --rc geninfo_unexecuted_blocks=1 00:16:02.005 00:16:02.005 ' 00:16:02.005 10:48:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:16:02.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:02.005 --rc genhtml_branch_coverage=1 00:16:02.005 --rc genhtml_function_coverage=1 00:16:02.005 --rc genhtml_legend=1 00:16:02.005 --rc geninfo_all_blocks=1 00:16:02.005 --rc geninfo_unexecuted_blocks=1 00:16:02.005 00:16:02.005 ' 00:16:02.005 10:48:01 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:16:02.005 10:48:01 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:16:02.005 10:48:01 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:16:02.005 10:48:01 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:16:02.005 10:48:01 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:16:02.005 10:48:01 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:16:02.005 10:48:01 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:02.005 10:48:01 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:16:02.005 10:48:01 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:16:02.005 10:48:01 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:02.005 10:48:01 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:02.005 10:48:01 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:16:02.005 10:48:01 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:16:02.005 10:48:01 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:02.005 10:48:01 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:02.005 10:48:01 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:16:02.005 10:48:01 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:16:02.005 10:48:01 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:02.005 10:48:01 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:02.005 10:48:01 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:16:02.005 10:48:01 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:16:02.005 10:48:01 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:02.005 10:48:01 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:02.005 10:48:01 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:02.005 10:48:01 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:02.005 10:48:01 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:16:02.005 10:48:01 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:16:02.005 10:48:01 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:02.005 10:48:01 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:02.005 10:48:01 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:16:02.005 10:48:01 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:16:02.005 10:48:01 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:16:02.005 10:48:01 ftl.ftl_bdevperf -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:02.005 10:48:01 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:16:02.005 10:48:01 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # bdevperf_pid=84657 00:16:02.005 10:48:01 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:16:02.005 10:48:01 ftl.ftl_bdevperf -- ftl/bdevperf.sh@20 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:16:02.005 10:48:01 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # waitforlisten 84657 00:16:02.005 10:48:01 ftl.ftl_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 84657 ']' 00:16:02.005 10:48:01 ftl.ftl_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:02.005 10:48:01 ftl.ftl_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:02.005 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:02.006 10:48:01 ftl.ftl_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:02.006 10:48:01 ftl.ftl_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:02.006 10:48:01 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:02.267 [2024-12-16 10:48:02.069760] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
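The trace above shows the standard launch pattern: bdevperf is started with -z (hold the app idle until an RPC tells it to run its jobs) and -T ftl0 (the target named in the trap), then waitforlisten polls until the app answers on /var/tmp/spdk.sock. A minimal sketch of that pattern, with an inline poll loop standing in for the waitforlisten helper:

  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 &
  bdevperf_pid=$!
  # Poll the default RPC socket until the app responds; rpc_get_methods is a
  # cheap query that succeeds as soon as the RPC server is up.
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock -t 1 rpc_get_methods >/dev/null 2>&1; do
    sleep 0.1
  done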
00:16:02.267 [2024-12-16 10:48:02.070109] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84657 ] 00:16:02.267 [2024-12-16 10:48:02.208975] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:02.529 [2024-12-16 10:48:02.260493] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:03.099 10:48:02 ftl.ftl_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:03.099 10:48:02 ftl.ftl_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:16:03.099 10:48:02 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:16:03.099 10:48:02 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:16:03.099 10:48:02 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:16:03.099 10:48:02 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:16:03.099 10:48:02 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:16:03.099 10:48:02 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:16:03.361 10:48:03 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:16:03.361 10:48:03 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:16:03.361 10:48:03 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:16:03.361 10:48:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:16:03.361 10:48:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:03.361 10:48:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:16:03.361 10:48:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:16:03.361 10:48:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:16:03.621 10:48:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:03.621 { 00:16:03.621 "name": "nvme0n1", 00:16:03.621 "aliases": [ 00:16:03.621 "217f2659-8502-44ef-8468-772c6c9faace" 00:16:03.621 ], 00:16:03.621 "product_name": "NVMe disk", 00:16:03.621 "block_size": 4096, 00:16:03.621 "num_blocks": 1310720, 00:16:03.621 "uuid": "217f2659-8502-44ef-8468-772c6c9faace", 00:16:03.621 "numa_id": -1, 00:16:03.621 "assigned_rate_limits": { 00:16:03.621 "rw_ios_per_sec": 0, 00:16:03.621 "rw_mbytes_per_sec": 0, 00:16:03.621 "r_mbytes_per_sec": 0, 00:16:03.621 "w_mbytes_per_sec": 0 00:16:03.621 }, 00:16:03.621 "claimed": true, 00:16:03.621 "claim_type": "read_many_write_one", 00:16:03.621 "zoned": false, 00:16:03.621 "supported_io_types": { 00:16:03.621 "read": true, 00:16:03.621 "write": true, 00:16:03.621 "unmap": true, 00:16:03.621 "flush": true, 00:16:03.621 "reset": true, 00:16:03.621 "nvme_admin": true, 00:16:03.621 "nvme_io": true, 00:16:03.621 "nvme_io_md": false, 00:16:03.621 "write_zeroes": true, 00:16:03.621 "zcopy": false, 00:16:03.621 "get_zone_info": false, 00:16:03.621 "zone_management": false, 00:16:03.621 "zone_append": false, 00:16:03.621 "compare": true, 00:16:03.621 "compare_and_write": false, 00:16:03.621 "abort": true, 00:16:03.621 "seek_hole": false, 00:16:03.621 "seek_data": false, 00:16:03.621 "copy": true, 00:16:03.621 "nvme_iov_md": false 00:16:03.621 }, 00:16:03.621 "driver_specific": { 00:16:03.621 
"nvme": [ 00:16:03.621 { 00:16:03.621 "pci_address": "0000:00:11.0", 00:16:03.621 "trid": { 00:16:03.621 "trtype": "PCIe", 00:16:03.621 "traddr": "0000:00:11.0" 00:16:03.621 }, 00:16:03.621 "ctrlr_data": { 00:16:03.621 "cntlid": 0, 00:16:03.621 "vendor_id": "0x1b36", 00:16:03.621 "model_number": "QEMU NVMe Ctrl", 00:16:03.621 "serial_number": "12341", 00:16:03.621 "firmware_revision": "8.0.0", 00:16:03.621 "subnqn": "nqn.2019-08.org.qemu:12341", 00:16:03.621 "oacs": { 00:16:03.621 "security": 0, 00:16:03.621 "format": 1, 00:16:03.621 "firmware": 0, 00:16:03.621 "ns_manage": 1 00:16:03.621 }, 00:16:03.621 "multi_ctrlr": false, 00:16:03.621 "ana_reporting": false 00:16:03.621 }, 00:16:03.621 "vs": { 00:16:03.621 "nvme_version": "1.4" 00:16:03.621 }, 00:16:03.621 "ns_data": { 00:16:03.621 "id": 1, 00:16:03.621 "can_share": false 00:16:03.621 } 00:16:03.621 } 00:16:03.621 ], 00:16:03.621 "mp_policy": "active_passive" 00:16:03.621 } 00:16:03.621 } 00:16:03.621 ]' 00:16:03.621 10:48:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:03.621 10:48:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:16:03.621 10:48:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:03.621 10:48:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=1310720 00:16:03.621 10:48:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:16:03.621 10:48:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 5120 00:16:03.621 10:48:03 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:16:03.621 10:48:03 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:16:03.621 10:48:03 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:16:03.621 10:48:03 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:16:03.621 10:48:03 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:16:03.911 10:48:03 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=3322746e-3d62-417d-aa63-6b7d0be7a114 00:16:03.911 10:48:03 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:16:03.911 10:48:03 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 3322746e-3d62-417d-aa63-6b7d0be7a114 00:16:04.191 10:48:03 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:16:04.192 10:48:04 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=f5536b6c-af4e-4430-8c05-71626cbcabcd 00:16:04.192 10:48:04 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u f5536b6c-af4e-4430-8c05-71626cbcabcd 00:16:04.453 10:48:04 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # split_bdev=7ec97dd6-b71a-43d0-8bd8-05da6d45617d 00:16:04.453 10:48:04 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_nv_cache_bdev nvc0 0000:00:10.0 7ec97dd6-b71a-43d0-8bd8-05da6d45617d 00:16:04.453 10:48:04 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:16:04.453 10:48:04 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:16:04.453 10:48:04 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=7ec97dd6-b71a-43d0-8bd8-05da6d45617d 00:16:04.453 10:48:04 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:16:04.453 10:48:04 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size 7ec97dd6-b71a-43d0-8bd8-05da6d45617d 00:16:04.453 10:48:04 
ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=7ec97dd6-b71a-43d0-8bd8-05da6d45617d 00:16:04.453 10:48:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:04.453 10:48:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:16:04.453 10:48:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:16:04.453 10:48:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 7ec97dd6-b71a-43d0-8bd8-05da6d45617d 00:16:04.713 10:48:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:04.713 { 00:16:04.713 "name": "7ec97dd6-b71a-43d0-8bd8-05da6d45617d", 00:16:04.713 "aliases": [ 00:16:04.713 "lvs/nvme0n1p0" 00:16:04.713 ], 00:16:04.713 "product_name": "Logical Volume", 00:16:04.713 "block_size": 4096, 00:16:04.713 "num_blocks": 26476544, 00:16:04.714 "uuid": "7ec97dd6-b71a-43d0-8bd8-05da6d45617d", 00:16:04.714 "assigned_rate_limits": { 00:16:04.714 "rw_ios_per_sec": 0, 00:16:04.714 "rw_mbytes_per_sec": 0, 00:16:04.714 "r_mbytes_per_sec": 0, 00:16:04.714 "w_mbytes_per_sec": 0 00:16:04.714 }, 00:16:04.714 "claimed": false, 00:16:04.714 "zoned": false, 00:16:04.714 "supported_io_types": { 00:16:04.714 "read": true, 00:16:04.714 "write": true, 00:16:04.714 "unmap": true, 00:16:04.714 "flush": false, 00:16:04.714 "reset": true, 00:16:04.714 "nvme_admin": false, 00:16:04.714 "nvme_io": false, 00:16:04.714 "nvme_io_md": false, 00:16:04.714 "write_zeroes": true, 00:16:04.714 "zcopy": false, 00:16:04.714 "get_zone_info": false, 00:16:04.714 "zone_management": false, 00:16:04.714 "zone_append": false, 00:16:04.714 "compare": false, 00:16:04.714 "compare_and_write": false, 00:16:04.714 "abort": false, 00:16:04.714 "seek_hole": true, 00:16:04.714 "seek_data": true, 00:16:04.714 "copy": false, 00:16:04.714 "nvme_iov_md": false 00:16:04.714 }, 00:16:04.714 "driver_specific": { 00:16:04.714 "lvol": { 00:16:04.714 "lvol_store_uuid": "f5536b6c-af4e-4430-8c05-71626cbcabcd", 00:16:04.714 "base_bdev": "nvme0n1", 00:16:04.714 "thin_provision": true, 00:16:04.714 "num_allocated_clusters": 0, 00:16:04.714 "snapshot": false, 00:16:04.714 "clone": false, 00:16:04.714 "esnap_clone": false 00:16:04.714 } 00:16:04.714 } 00:16:04.714 } 00:16:04.714 ]' 00:16:04.714 10:48:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:04.714 10:48:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:16:04.714 10:48:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:04.714 10:48:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=26476544 00:16:04.714 10:48:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:16:04.714 10:48:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 103424 00:16:04.714 10:48:04 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:16:04.714 10:48:04 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:16:04.714 10:48:04 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:16:04.974 10:48:04 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:16:04.974 10:48:04 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:16:04.974 10:48:04 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size 7ec97dd6-b71a-43d0-8bd8-05da6d45617d 00:16:04.974 10:48:04 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1378 -- # local bdev_name=7ec97dd6-b71a-43d0-8bd8-05da6d45617d 00:16:04.974 10:48:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:04.974 10:48:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:16:04.974 10:48:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:16:04.974 10:48:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 7ec97dd6-b71a-43d0-8bd8-05da6d45617d 00:16:05.236 10:48:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:05.236 { 00:16:05.236 "name": "7ec97dd6-b71a-43d0-8bd8-05da6d45617d", 00:16:05.236 "aliases": [ 00:16:05.236 "lvs/nvme0n1p0" 00:16:05.236 ], 00:16:05.236 "product_name": "Logical Volume", 00:16:05.236 "block_size": 4096, 00:16:05.236 "num_blocks": 26476544, 00:16:05.236 "uuid": "7ec97dd6-b71a-43d0-8bd8-05da6d45617d", 00:16:05.236 "assigned_rate_limits": { 00:16:05.236 "rw_ios_per_sec": 0, 00:16:05.236 "rw_mbytes_per_sec": 0, 00:16:05.236 "r_mbytes_per_sec": 0, 00:16:05.236 "w_mbytes_per_sec": 0 00:16:05.236 }, 00:16:05.236 "claimed": false, 00:16:05.236 "zoned": false, 00:16:05.236 "supported_io_types": { 00:16:05.236 "read": true, 00:16:05.236 "write": true, 00:16:05.236 "unmap": true, 00:16:05.236 "flush": false, 00:16:05.236 "reset": true, 00:16:05.236 "nvme_admin": false, 00:16:05.236 "nvme_io": false, 00:16:05.236 "nvme_io_md": false, 00:16:05.236 "write_zeroes": true, 00:16:05.236 "zcopy": false, 00:16:05.236 "get_zone_info": false, 00:16:05.236 "zone_management": false, 00:16:05.236 "zone_append": false, 00:16:05.236 "compare": false, 00:16:05.236 "compare_and_write": false, 00:16:05.236 "abort": false, 00:16:05.236 "seek_hole": true, 00:16:05.236 "seek_data": true, 00:16:05.236 "copy": false, 00:16:05.236 "nvme_iov_md": false 00:16:05.236 }, 00:16:05.236 "driver_specific": { 00:16:05.236 "lvol": { 00:16:05.236 "lvol_store_uuid": "f5536b6c-af4e-4430-8c05-71626cbcabcd", 00:16:05.236 "base_bdev": "nvme0n1", 00:16:05.236 "thin_provision": true, 00:16:05.236 "num_allocated_clusters": 0, 00:16:05.236 "snapshot": false, 00:16:05.236 "clone": false, 00:16:05.236 "esnap_clone": false 00:16:05.236 } 00:16:05.236 } 00:16:05.236 } 00:16:05.236 ]' 00:16:05.236 10:48:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:05.236 10:48:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:16:05.236 10:48:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:05.236 10:48:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=26476544 00:16:05.236 10:48:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:16:05.236 10:48:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 103424 00:16:05.236 10:48:05 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:16:05.236 10:48:05 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:16:05.494 10:48:05 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # nv_cache=nvc0n1p0 00:16:05.494 10:48:05 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # get_bdev_size 7ec97dd6-b71a-43d0-8bd8-05da6d45617d 00:16:05.494 10:48:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=7ec97dd6-b71a-43d0-8bd8-05da6d45617d 00:16:05.494 10:48:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:05.494 10:48:05 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1380 -- # local bs 00:16:05.495 10:48:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:16:05.495 10:48:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 7ec97dd6-b71a-43d0-8bd8-05da6d45617d 00:16:05.754 10:48:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:05.754 { 00:16:05.754 "name": "7ec97dd6-b71a-43d0-8bd8-05da6d45617d", 00:16:05.754 "aliases": [ 00:16:05.754 "lvs/nvme0n1p0" 00:16:05.754 ], 00:16:05.754 "product_name": "Logical Volume", 00:16:05.754 "block_size": 4096, 00:16:05.754 "num_blocks": 26476544, 00:16:05.754 "uuid": "7ec97dd6-b71a-43d0-8bd8-05da6d45617d", 00:16:05.754 "assigned_rate_limits": { 00:16:05.754 "rw_ios_per_sec": 0, 00:16:05.754 "rw_mbytes_per_sec": 0, 00:16:05.754 "r_mbytes_per_sec": 0, 00:16:05.754 "w_mbytes_per_sec": 0 00:16:05.754 }, 00:16:05.754 "claimed": false, 00:16:05.754 "zoned": false, 00:16:05.754 "supported_io_types": { 00:16:05.754 "read": true, 00:16:05.754 "write": true, 00:16:05.754 "unmap": true, 00:16:05.754 "flush": false, 00:16:05.754 "reset": true, 00:16:05.754 "nvme_admin": false, 00:16:05.754 "nvme_io": false, 00:16:05.754 "nvme_io_md": false, 00:16:05.754 "write_zeroes": true, 00:16:05.754 "zcopy": false, 00:16:05.754 "get_zone_info": false, 00:16:05.754 "zone_management": false, 00:16:05.754 "zone_append": false, 00:16:05.754 "compare": false, 00:16:05.754 "compare_and_write": false, 00:16:05.754 "abort": false, 00:16:05.754 "seek_hole": true, 00:16:05.754 "seek_data": true, 00:16:05.754 "copy": false, 00:16:05.754 "nvme_iov_md": false 00:16:05.754 }, 00:16:05.754 "driver_specific": { 00:16:05.754 "lvol": { 00:16:05.754 "lvol_store_uuid": "f5536b6c-af4e-4430-8c05-71626cbcabcd", 00:16:05.754 "base_bdev": "nvme0n1", 00:16:05.754 "thin_provision": true, 00:16:05.754 "num_allocated_clusters": 0, 00:16:05.754 "snapshot": false, 00:16:05.754 "clone": false, 00:16:05.754 "esnap_clone": false 00:16:05.754 } 00:16:05.754 } 00:16:05.754 } 00:16:05.754 ]' 00:16:05.754 10:48:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:05.754 10:48:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:16:05.754 10:48:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:05.754 10:48:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=26476544 00:16:05.754 10:48:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:16:05.754 10:48:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 103424 00:16:05.754 10:48:05 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # l2p_dram_size_mb=20 00:16:05.754 10:48:05 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 7ec97dd6-b71a-43d0-8bd8-05da6d45617d -c nvc0n1p0 --l2p_dram_limit 20 00:16:06.015 [2024-12-16 10:48:05.819774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:06.015 [2024-12-16 10:48:05.820057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:06.015 [2024-12-16 10:48:05.820088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:06.015 [2024-12-16 10:48:05.820101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:06.015 [2024-12-16 10:48:05.820187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:06.015 [2024-12-16 10:48:05.820197] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:06.015 [2024-12-16 10:48:05.820216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:16:06.015 [2024-12-16 10:48:05.820230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:06.015 [2024-12-16 10:48:05.820252] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:06.015 [2024-12-16 10:48:05.820571] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:06.015 [2024-12-16 10:48:05.820591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:06.015 [2024-12-16 10:48:05.820601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:06.015 [2024-12-16 10:48:05.820614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.345 ms 00:16:06.015 [2024-12-16 10:48:05.820624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:06.015 [2024-12-16 10:48:05.820664] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID f9598845-bf16-4f0d-945b-060b25bf811d 00:16:06.015 [2024-12-16 10:48:05.822446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:06.015 [2024-12-16 10:48:05.822496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:16:06.015 [2024-12-16 10:48:05.822508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:16:06.015 [2024-12-16 10:48:05.822518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:06.015 [2024-12-16 10:48:05.831243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:06.015 [2024-12-16 10:48:05.831432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:06.015 [2024-12-16 10:48:05.831450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.648 ms 00:16:06.015 [2024-12-16 10:48:05.831467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:06.015 [2024-12-16 10:48:05.831551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:06.015 [2024-12-16 10:48:05.831562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:06.015 [2024-12-16 10:48:05.831572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:16:06.015 [2024-12-16 10:48:05.831583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:06.015 [2024-12-16 10:48:05.831645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:06.015 [2024-12-16 10:48:05.831661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:06.015 [2024-12-16 10:48:05.831674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:16:06.015 [2024-12-16 10:48:05.831684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:06.015 [2024-12-16 10:48:05.831708] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:06.015 [2024-12-16 10:48:05.833899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:06.015 [2024-12-16 10:48:05.833962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:06.015 [2024-12-16 10:48:05.833975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.194 ms 00:16:06.015 [2024-12-16 10:48:05.833988] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:06.015 [2024-12-16 10:48:05.834029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:06.015 [2024-12-16 10:48:05.834041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:06.015 [2024-12-16 10:48:05.834058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:16:06.015 [2024-12-16 10:48:05.834065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:06.015 [2024-12-16 10:48:05.834082] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:16:06.015 [2024-12-16 10:48:05.834238] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:16:06.015 [2024-12-16 10:48:05.834253] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:06.015 [2024-12-16 10:48:05.834269] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:16:06.015 [2024-12-16 10:48:05.834285] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:06.015 [2024-12-16 10:48:05.834295] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:06.015 [2024-12-16 10:48:05.834305] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:16:06.015 [2024-12-16 10:48:05.834313] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:06.015 [2024-12-16 10:48:05.834323] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:16:06.015 [2024-12-16 10:48:05.834334] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:16:06.015 [2024-12-16 10:48:05.834344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:06.015 [2024-12-16 10:48:05.834351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:06.015 [2024-12-16 10:48:05.834363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.262 ms 00:16:06.015 [2024-12-16 10:48:05.834372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:06.015 [2024-12-16 10:48:05.834456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:06.015 [2024-12-16 10:48:05.834465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:06.015 [2024-12-16 10:48:05.834474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:16:06.015 [2024-12-16 10:48:05.834481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:06.015 [2024-12-16 10:48:05.834577] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:06.015 [2024-12-16 10:48:05.834587] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:06.015 [2024-12-16 10:48:05.834598] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:06.015 [2024-12-16 10:48:05.834610] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:06.015 [2024-12-16 10:48:05.834621] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:06.015 [2024-12-16 10:48:05.834630] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:06.015 [2024-12-16 10:48:05.834640] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:16:06.015 
[2024-12-16 10:48:05.834648] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:06.015 [2024-12-16 10:48:05.834658] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:16:06.015 [2024-12-16 10:48:05.834666] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:06.015 [2024-12-16 10:48:05.834677] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:06.015 [2024-12-16 10:48:05.834685] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:16:06.015 [2024-12-16 10:48:05.834697] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:06.015 [2024-12-16 10:48:05.834705] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:06.015 [2024-12-16 10:48:05.834715] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:16:06.015 [2024-12-16 10:48:05.834723] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:06.015 [2024-12-16 10:48:05.834733] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:06.015 [2024-12-16 10:48:05.834741] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:16:06.015 [2024-12-16 10:48:05.834751] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:06.015 [2024-12-16 10:48:05.834762] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:06.015 [2024-12-16 10:48:05.834772] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:16:06.015 [2024-12-16 10:48:05.834780] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:06.015 [2024-12-16 10:48:05.834790] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:06.015 [2024-12-16 10:48:05.834797] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:16:06.015 [2024-12-16 10:48:05.834807] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:06.015 [2024-12-16 10:48:05.834815] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:06.015 [2024-12-16 10:48:05.834827] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:16:06.015 [2024-12-16 10:48:05.834835] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:06.015 [2024-12-16 10:48:05.834847] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:06.015 [2024-12-16 10:48:05.834854] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:16:06.015 [2024-12-16 10:48:05.834864] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:06.015 [2024-12-16 10:48:05.834871] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:06.015 [2024-12-16 10:48:05.834881] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:16:06.015 [2024-12-16 10:48:05.834888] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:06.015 [2024-12-16 10:48:05.834899] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:06.015 [2024-12-16 10:48:05.834907] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:16:06.015 [2024-12-16 10:48:05.834916] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:06.015 [2024-12-16 10:48:05.834923] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:16:06.015 [2024-12-16 10:48:05.835172] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.62 MiB 00:16:06.015 [2024-12-16 10:48:05.835195] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:06.015 [2024-12-16 10:48:05.835216] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:16:06.015 [2024-12-16 10:48:05.835236] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:16:06.015 [2024-12-16 10:48:05.835256] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:06.015 [2024-12-16 10:48:05.835330] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:06.016 [2024-12-16 10:48:05.835361] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:06.016 [2024-12-16 10:48:05.835381] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:06.016 [2024-12-16 10:48:05.835403] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:06.016 [2024-12-16 10:48:05.835422] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:06.016 [2024-12-16 10:48:05.835444] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:06.016 [2024-12-16 10:48:05.835571] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:06.016 [2024-12-16 10:48:05.835592] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:06.016 [2024-12-16 10:48:05.835612] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:06.016 [2024-12-16 10:48:05.835635] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:06.016 [2024-12-16 10:48:05.835700] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:06.016 [2024-12-16 10:48:05.835740] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:06.016 [2024-12-16 10:48:05.835820] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:16:06.016 [2024-12-16 10:48:05.835853] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:16:06.016 [2024-12-16 10:48:05.835912] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:16:06.016 [2024-12-16 10:48:05.835998] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:16:06.016 [2024-12-16 10:48:05.836033] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:16:06.016 [2024-12-16 10:48:05.836095] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:16:06.016 [2024-12-16 10:48:05.836126] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:16:06.016 [2024-12-16 10:48:05.836201] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:16:06.016 [2024-12-16 10:48:05.836233] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:16:06.016 [2024-12-16 10:48:05.836265] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:16:06.016 [2024-12-16 10:48:05.836321] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:16:06.016 [2024-12-16 10:48:05.836354] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:16:06.016 [2024-12-16 10:48:05.836432] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:16:06.016 [2024-12-16 10:48:05.836442] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:16:06.016 [2024-12-16 10:48:05.836450] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:06.016 [2024-12-16 10:48:05.836461] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:06.016 [2024-12-16 10:48:05.836470] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:06.016 [2024-12-16 10:48:05.836480] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:06.016 [2024-12-16 10:48:05.836487] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:06.016 [2024-12-16 10:48:05.836499] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:06.016 [2024-12-16 10:48:05.836509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:06.016 [2024-12-16 10:48:05.836526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:06.016 [2024-12-16 10:48:05.836537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.001 ms 00:16:06.016 [2024-12-16 10:48:05.836548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:06.016 [2024-12-16 10:48:05.836617] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
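The layout dump above is internally consistent with the earlier bdev_get_bdevs output, and the 240-second timeout passed to the bdev_ftl_create RPC fits the scrub warning that closes the dump. A quick cross-check of the reported numbers (shell arithmetic for reference only, not part of the test run):

    # base capacity: num_blocks x block_size from the bdev_get_bdevs JSON earlier
    echo $(( 26476544 * 4096 / 1024 / 1024 ))   # 103424 -> MiB, matches "Base device capacity"
    # l2p region: 20971520 entries x 4 B each ("L2P address size: 4")
    echo $(( 20971520 * 4 / 1024 / 1024 ))      # 80 -> MiB, matches "Region l2p ... blocks: 80.00 MiB"
    # addressable user space: 20971520 LBAs x 4 KiB blocks
    echo $(( 20971520 * 4096 / 1024 / 1024 ))   # 81920 -> MiB, i.e. 80 GiB exposed by ftl0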
00:16:06.016 [2024-12-16 10:48:05.836631] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:16:09.306 [2024-12-16 10:48:08.717721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:09.306 [2024-12-16 10:48:08.717801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:16:09.306 [2024-12-16 10:48:08.717818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2881.090 ms 00:16:09.306 [2024-12-16 10:48:08.717831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:09.306 [2024-12-16 10:48:08.746631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:09.306 [2024-12-16 10:48:08.746731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:09.306 [2024-12-16 10:48:08.746763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.659 ms 00:16:09.306 [2024-12-16 10:48:08.746797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:09.306 [2024-12-16 10:48:08.747133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:09.306 [2024-12-16 10:48:08.747171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:09.306 [2024-12-16 10:48:08.747209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.216 ms 00:16:09.306 [2024-12-16 10:48:08.747234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:09.306 [2024-12-16 10:48:08.759598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:09.306 [2024-12-16 10:48:08.759653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:09.306 [2024-12-16 10:48:08.759665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.261 ms 00:16:09.306 [2024-12-16 10:48:08.759676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:09.306 [2024-12-16 10:48:08.759709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:09.306 [2024-12-16 10:48:08.759720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:09.306 [2024-12-16 10:48:08.759729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:09.306 [2024-12-16 10:48:08.759739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:09.306 [2024-12-16 10:48:08.760359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:09.306 [2024-12-16 10:48:08.760394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:09.306 [2024-12-16 10:48:08.760411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.566 ms 00:16:09.306 [2024-12-16 10:48:08.760428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:09.306 [2024-12-16 10:48:08.760557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:09.306 [2024-12-16 10:48:08.760570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:09.306 [2024-12-16 10:48:08.760580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.103 ms 00:16:09.306 [2024-12-16 10:48:08.760591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:09.306 [2024-12-16 10:48:08.767871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:09.306 [2024-12-16 10:48:08.768093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:09.306 [2024-12-16 
10:48:08.768112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.260 ms 00:16:09.307 [2024-12-16 10:48:08.768122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:09.307 [2024-12-16 10:48:08.778111] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:16:09.307 [2024-12-16 10:48:08.784999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:09.307 [2024-12-16 10:48:08.785040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:09.307 [2024-12-16 10:48:08.785053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.794 ms 00:16:09.307 [2024-12-16 10:48:08.785061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:09.307 [2024-12-16 10:48:08.863282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:09.307 [2024-12-16 10:48:08.863531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:16:09.307 [2024-12-16 10:48:08.863566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 78.181 ms 00:16:09.307 [2024-12-16 10:48:08.863576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:09.307 [2024-12-16 10:48:08.863779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:09.307 [2024-12-16 10:48:08.863791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:09.307 [2024-12-16 10:48:08.863807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.155 ms 00:16:09.307 [2024-12-16 10:48:08.863815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:09.307 [2024-12-16 10:48:08.869570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:09.307 [2024-12-16 10:48:08.869620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:16:09.307 [2024-12-16 10:48:08.869635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.728 ms 00:16:09.307 [2024-12-16 10:48:08.869644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:09.307 [2024-12-16 10:48:08.874546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:09.307 [2024-12-16 10:48:08.874595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:16:09.307 [2024-12-16 10:48:08.874608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.843 ms 00:16:09.307 [2024-12-16 10:48:08.874616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:09.307 [2024-12-16 10:48:08.874991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:09.307 [2024-12-16 10:48:08.875004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:09.307 [2024-12-16 10:48:08.875022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.329 ms 00:16:09.307 [2024-12-16 10:48:08.875030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:09.307 [2024-12-16 10:48:08.913696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:09.307 [2024-12-16 10:48:08.913891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:16:09.307 [2024-12-16 10:48:08.913917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.623 ms 00:16:09.307 [2024-12-16 10:48:08.913947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:09.307 [2024-12-16 10:48:08.920286] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:09.307 [2024-12-16 10:48:08.920453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:16:09.307 [2024-12-16 10:48:08.920481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.264 ms 00:16:09.307 [2024-12-16 10:48:08.920495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:09.307 [2024-12-16 10:48:08.926280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:09.307 [2024-12-16 10:48:08.926327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:16:09.307 [2024-12-16 10:48:08.926340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.708 ms 00:16:09.307 [2024-12-16 10:48:08.926348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:09.307 [2024-12-16 10:48:08.932462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:09.307 [2024-12-16 10:48:08.932510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:09.307 [2024-12-16 10:48:08.932526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.065 ms 00:16:09.307 [2024-12-16 10:48:08.932534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:09.307 [2024-12-16 10:48:08.932590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:09.307 [2024-12-16 10:48:08.932600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:09.307 [2024-12-16 10:48:08.932613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:16:09.307 [2024-12-16 10:48:08.932621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:09.307 [2024-12-16 10:48:08.932700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:09.307 [2024-12-16 10:48:08.932709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:09.307 [2024-12-16 10:48:08.932720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:16:09.307 [2024-12-16 10:48:08.932728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:09.307 [2024-12-16 10:48:08.934233] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3113.948 ms, result 0 00:16:09.307 { 00:16:09.307 "name": "ftl0", 00:16:09.307 "uuid": "f9598845-bf16-4f0d-945b-060b25bf811d" 00:16:09.307 } 00:16:09.307 10:48:08 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:16:09.307 10:48:08 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # jq -r .name 00:16:09.307 10:48:08 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # grep -qw ftl0 00:16:09.307 10:48:09 ftl.ftl_bdevperf -- ftl/bdevperf.sh@30 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:16:09.307 [2024-12-16 10:48:09.268157] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:16:09.307 I/O size of 69632 is greater than zero copy threshold (65536). 00:16:09.307 Zero copy mechanism will not be used. 00:16:09.307 Running I/O for 4 seconds... 
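This first pass drives bdevperf through its RPC helper with a 69632-byte I/O size: 69632 = 17 * 4096 (68 KiB), one 4 KiB block above the 65536-byte threshold, which is why the notice above reports that the zero-copy mechanism is skipped. A sketch of the equivalent manual invocation, using the paths shown in this log and assuming a bdevperf server is already up and waiting for RPCs:

    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests \
        -q 1 -w randwrite -t 4 -o 69632   # queue depth 1, random writes, 4 s, 68 KiB I/Os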
00:16:11.639 1210.00 IOPS, 80.35 MiB/s [2024-12-16T10:48:12.572Z] 1244.50 IOPS, 82.64 MiB/s [2024-12-16T10:48:13.518Z] 1176.33 IOPS, 78.12 MiB/s [2024-12-16T10:48:13.518Z] 1129.50 IOPS, 75.01 MiB/s 00:16:13.529 Latency(us) 00:16:13.529 [2024-12-16T10:48:13.518Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:13.529 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632) 00:16:13.529 ftl0 : 4.00 1129.02 74.97 0.00 0.00 931.12 241.03 3163.37 00:16:13.529 [2024-12-16T10:48:13.518Z] =================================================================================================================== 00:16:13.529 [2024-12-16T10:48:13.518Z] Total : 1129.02 74.97 0.00 0.00 931.12 241.03 3163.37 00:16:13.529 { 00:16:13.529 "results": [ 00:16:13.529 { 00:16:13.529 "job": "ftl0", 00:16:13.529 "core_mask": "0x1", 00:16:13.529 "workload": "randwrite", 00:16:13.529 "status": "finished", 00:16:13.529 "queue_depth": 1, 00:16:13.529 "io_size": 69632, 00:16:13.529 "runtime": 4.002588, 00:16:13.529 "iops": 1129.019524367734, 00:16:13.529 "mibps": 74.97395279004483, 00:16:13.529 "io_failed": 0, 00:16:13.529 "io_timeout": 0, 00:16:13.529 "avg_latency_us": 931.1237734692835, 00:16:13.529 "min_latency_us": 241.03384615384616, 00:16:13.529 "max_latency_us": 3163.372307692308 00:16:13.529 } 00:16:13.529 ], 00:16:13.529 "core_count": 1 00:16:13.529 } 00:16:13.529 [2024-12-16 10:48:13.278059] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:16:13.529 10:48:13 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096 00:16:13.529 [2024-12-16 10:48:13.390893] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:16:13.529 Running I/O for 4 seconds... 
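In these Latency(us) tables the MiB/s column follows directly from IOPS times I/O size, so the q=1 run above can be cross-checked in one line (reference arithmetic, not part of the run):

    awk 'BEGIN { printf "%.2f\n", 1129.02 * 69632 / 1048576 }'   # -> 74.97 MiB/s, matching the Total row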
00:16:15.418 5947.00 IOPS, 23.23 MiB/s [2024-12-16T10:48:16.796Z] 5326.00 IOPS, 20.80 MiB/s [2024-12-16T10:48:17.739Z] 5100.33 IOPS, 19.92 MiB/s [2024-12-16T10:48:17.739Z] 5012.25 IOPS, 19.58 MiB/s 00:16:17.750 Latency(us) 00:16:17.750 [2024-12-16T10:48:17.739Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:17.750 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096) 00:16:17.750 ftl0 : 4.03 5004.98 19.55 0.00 0.00 25482.38 322.95 48597.46 00:16:17.750 [2024-12-16T10:48:17.739Z] =================================================================================================================== 00:16:17.750 [2024-12-16T10:48:17.739Z] Total : 5004.98 19.55 0.00 0.00 25482.38 0.00 48597.46 00:16:17.750 [2024-12-16 10:48:17.428281] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:16:17.750 { 00:16:17.750 "results": [ 00:16:17.750 { 00:16:17.750 "job": "ftl0", 00:16:17.750 "core_mask": "0x1", 00:16:17.750 "workload": "randwrite", 00:16:17.750 "status": "finished", 00:16:17.750 "queue_depth": 128, 00:16:17.750 "io_size": 4096, 00:16:17.750 "runtime": 4.029187, 00:16:17.750 "iops": 5004.9799128211225, 00:16:17.750 "mibps": 19.55070278445751, 00:16:17.750 "io_failed": 0, 00:16:17.750 "io_timeout": 0, 00:16:17.750 "avg_latency_us": 25482.382765812985, 00:16:17.750 "min_latency_us": 322.95384615384614, 00:16:17.751 "max_latency_us": 48597.46461538462 00:16:17.751 } 00:16:17.751 ], 00:16:17.751 "core_count": 1 00:16:17.751 } 00:16:17.751 10:48:17 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096 00:16:17.751 [2024-12-16 10:48:17.542286] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:16:17.751 Running I/O for 4 seconds... 
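With the queue kept full, average latency should sit near queue_depth / IOPS (Little's law), so the q=128 randwrite table above is roughly self-consistent; the small gap comes from the 4.029 s effective runtime and ramp effects:

    awk 'BEGIN { printf "%.0f us\n", 128 / 5004.98 * 1e6 }'   # -> 25575 us vs the 25482.38 us reported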
00:16:19.637 4581.00 IOPS, 17.89 MiB/s [2024-12-16T10:48:20.571Z] 4526.00 IOPS, 17.68 MiB/s [2024-12-16T10:48:21.960Z] 4561.67 IOPS, 17.82 MiB/s [2024-12-16T10:48:21.960Z] 4552.25 IOPS, 17.78 MiB/s 00:16:21.971 Latency(us) 00:16:21.971 [2024-12-16T10:48:21.960Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:21.971 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:21.971 Verification LBA range: start 0x0 length 0x1400000 00:16:21.971 ftl0 : 4.01 4568.45 17.85 0.00 0.00 27942.06 389.12 41338.09 00:16:21.971 [2024-12-16T10:48:21.960Z] =================================================================================================================== 00:16:21.971 [2024-12-16T10:48:21.960Z] Total : 4568.45 17.85 0.00 0.00 27942.06 0.00 41338.09 00:16:21.971 [2024-12-16 10:48:21.564514] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:16:21.971 { 00:16:21.971 "results": [ 00:16:21.971 { 00:16:21.971 "job": "ftl0", 00:16:21.971 "core_mask": "0x1", 00:16:21.971 "workload": "verify", 00:16:21.971 "status": "finished", 00:16:21.971 "verify_range": { 00:16:21.971 "start": 0, 00:16:21.971 "length": 20971520 00:16:21.971 }, 00:16:21.971 "queue_depth": 128, 00:16:21.971 "io_size": 4096, 00:16:21.971 "runtime": 4.013178, 00:16:21.971 "iops": 4568.449243965755, 00:16:21.971 "mibps": 17.84550485924123, 00:16:21.971 "io_failed": 0, 00:16:21.971 "io_timeout": 0, 00:16:21.971 "avg_latency_us": 27942.060094486074, 00:16:21.971 "min_latency_us": 389.12, 00:16:21.971 "max_latency_us": 41338.092307692306 00:16:21.971 } 00:16:21.971 ], 00:16:21.971 "core_count": 1 00:16:21.971 } 00:16:21.971 10:48:21 ftl.ftl_bdevperf -- ftl/bdevperf.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0 00:16:21.971 [2024-12-16 10:48:21.772992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.971 [2024-12-16 10:48:21.773219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:21.971 [2024-12-16 10:48:21.773415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:21.971 [2024-12-16 10:48:21.773446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.971 [2024-12-16 10:48:21.773497] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:21.971 [2024-12-16 10:48:21.774503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.971 [2024-12-16 10:48:21.774691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:21.971 [2024-12-16 10:48:21.774834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.965 ms 00:16:21.971 [2024-12-16 10:48:21.774877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.971 [2024-12-16 10:48:21.777625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.971 [2024-12-16 10:48:21.777807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:21.971 [2024-12-16 10:48:21.777966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.700 ms 00:16:21.971 [2024-12-16 10:48:21.778005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.233 [2024-12-16 10:48:21.997224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:22.233 [2024-12-16 10:48:21.997443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 
00:16:22.233 [2024-12-16 10:48:21.997529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 219.178 ms 00:16:22.233 [2024-12-16 10:48:21.997560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.233 [2024-12-16 10:48:22.003824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:22.233 [2024-12-16 10:48:22.004028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:16:22.233 [2024-12-16 10:48:22.004140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.207 ms 00:16:22.233 [2024-12-16 10:48:22.004169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.233 [2024-12-16 10:48:22.006970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:22.233 [2024-12-16 10:48:22.007157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:22.233 [2024-12-16 10:48:22.007266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.707 ms 00:16:22.233 [2024-12-16 10:48:22.007295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.233 [2024-12-16 10:48:22.014564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:22.233 [2024-12-16 10:48:22.014805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:22.233 [2024-12-16 10:48:22.014830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.209 ms 00:16:22.233 [2024-12-16 10:48:22.014853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.233 [2024-12-16 10:48:22.015052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:22.233 [2024-12-16 10:48:22.015078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:22.233 [2024-12-16 10:48:22.015089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.148 ms 00:16:22.233 [2024-12-16 10:48:22.015100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.233 [2024-12-16 10:48:22.018394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:22.233 [2024-12-16 10:48:22.018461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:16:22.233 [2024-12-16 10:48:22.018473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.275 ms 00:16:22.233 [2024-12-16 10:48:22.018483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.233 [2024-12-16 10:48:22.021177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:22.233 [2024-12-16 10:48:22.021234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:16:22.233 [2024-12-16 10:48:22.021245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.646 ms 00:16:22.233 [2024-12-16 10:48:22.021256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.234 [2024-12-16 10:48:22.023549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:22.234 [2024-12-16 10:48:22.023610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:22.234 [2024-12-16 10:48:22.023621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.247 ms 00:16:22.234 [2024-12-16 10:48:22.023634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.234 [2024-12-16 10:48:22.025782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:22.234 [2024-12-16 10:48:22.025841] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:22.234 [2024-12-16 10:48:22.025853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.074 ms 00:16:22.234 [2024-12-16 10:48:22.025863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.234 [2024-12-16 10:48:22.025907] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:22.234 [2024-12-16 10:48:22.025954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:22.234 [2024-12-16 10:48:22.025967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:22.234 [2024-12-16 10:48:22.025979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:22.234 [2024-12-16 10:48:22.025988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:22.234 [2024-12-16 10:48:22.025999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:22.234 [2024-12-16 10:48:22.026007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:22.234 [2024-12-16 10:48:22.026018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:22.234 [2024-12-16 10:48:22.026026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:22.234 [2024-12-16 10:48:22.026036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:22.234 [2024-12-16 10:48:22.026044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:22.234 [2024-12-16 10:48:22.026057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:16:22.234 [2024-12-16 10:48:22.026065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:16:22.234 [2024-12-16 10:48:22.026095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:16:22.234 [2024-12-16 10:48:22.026104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:16:22.234 [2024-12-16 10:48:22.026114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:16:22.234 [2024-12-16 10:48:22.026122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:16:22.234 [2024-12-16 10:48:22.026131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:16:22.234 [2024-12-16 10:48:22.026139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:16:22.234 [2024-12-16 10:48:22.026148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:16:22.234 [2024-12-16 10:48:22.026155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:16:22.234 [2024-12-16 10:48:22.026166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:16:22.234 [2024-12-16 10:48:22.026174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: 
free 00:16:22.234 [2024-12-16 10:48:22.026184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:16:22.234 [2024-12-16 10:48:22.026191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:16:22.234 [2024-12-16 10:48:22.026201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:16:22.234 [2024-12-16 10:48:22.026208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:16:22.234 [2024-12-16 10:48:22.026224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:16:22.234 [2024-12-16 10:48:22.026232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:16:22.234 [2024-12-16 10:48:22.026242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:16:22.234 [2024-12-16 10:48:22.026249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:16:22.234 [2024-12-16 10:48:22.026259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:16:22.234 [2024-12-16 10:48:22.026271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:16:22.234 [2024-12-16 10:48:22.026281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:16:22.234 [2024-12-16 10:48:22.026291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:16:22.234 [2024-12-16 10:48:22.026301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:16:22.234 [2024-12-16 10:48:22.026309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:16:22.234 [2024-12-16 10:48:22.026319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:16:22.234 [2024-12-16 10:48:22.026327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:16:22.234 [2024-12-16 10:48:22.026337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:16:22.234 [2024-12-16 10:48:22.026344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:16:22.234 [2024-12-16 10:48:22.026354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:16:22.234 [2024-12-16 10:48:22.026362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:16:22.234 [2024-12-16 10:48:22.026375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:16:22.234 [2024-12-16 10:48:22.026382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:16:22.234 [2024-12-16 10:48:22.026392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:16:22.234 [2024-12-16 10:48:22.026399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:16:22.234 [2024-12-16 10:48:22.026408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 
261120 wr_cnt: 0 state: free 00:16:22.234 [2024-12-16 10:48:22.026416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:16:22.234 [2024-12-16 10:48:22.026426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:16:22.234 [2024-12-16 10:48:22.026433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:16:22.234 [2024-12-16 10:48:22.026442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:16:22.234 [2024-12-16 10:48:22.026450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:16:22.234 [2024-12-16 10:48:22.026459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:16:22.234 [2024-12-16 10:48:22.026467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:16:22.234 [2024-12-16 10:48:22.026479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:16:22.234 [2024-12-16 10:48:22.026486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:16:22.234 [2024-12-16 10:48:22.026495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:16:22.234 [2024-12-16 10:48:22.026503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:16:22.234 [2024-12-16 10:48:22.026514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:16:22.234 [2024-12-16 10:48:22.026522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:16:22.234 [2024-12-16 10:48:22.026539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:16:22.234 [2024-12-16 10:48:22.026546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:16:22.234 [2024-12-16 10:48:22.026555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:16:22.234 [2024-12-16 10:48:22.026563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:16:22.234 [2024-12-16 10:48:22.026573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:16:22.234 [2024-12-16 10:48:22.026582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:16:22.234 [2024-12-16 10:48:22.026592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:16:22.234 [2024-12-16 10:48:22.026601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:16:22.234 [2024-12-16 10:48:22.026611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:16:22.234 [2024-12-16 10:48:22.026619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:16:22.234 [2024-12-16 10:48:22.026629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:16:22.234 [2024-12-16 10:48:22.026636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:16:22.234 [2024-12-16 10:48:22.026646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:16:22.234 [2024-12-16 10:48:22.026654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:16:22.234 [2024-12-16 10:48:22.026666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:16:22.234 [2024-12-16 10:48:22.026674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:16:22.234 [2024-12-16 10:48:22.026684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:16:22.234 [2024-12-16 10:48:22.026691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:16:22.235 [2024-12-16 10:48:22.026700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:16:22.235 [2024-12-16 10:48:22.026708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:16:22.235 [2024-12-16 10:48:22.026718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:16:22.235 [2024-12-16 10:48:22.026725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:16:22.235 [2024-12-16 10:48:22.026734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:16:22.235 [2024-12-16 10:48:22.026741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:16:22.235 [2024-12-16 10:48:22.026751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:16:22.235 [2024-12-16 10:48:22.026759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:16:22.235 [2024-12-16 10:48:22.026768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:16:22.235 [2024-12-16 10:48:22.026776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:16:22.235 [2024-12-16 10:48:22.026785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:16:22.235 [2024-12-16 10:48:22.026792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:16:22.235 [2024-12-16 10:48:22.026803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:16:22.235 [2024-12-16 10:48:22.026811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:16:22.235 [2024-12-16 10:48:22.026821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:16:22.235 [2024-12-16 10:48:22.026828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:16:22.235 [2024-12-16 10:48:22.026837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:16:22.235 [2024-12-16 10:48:22.026845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:16:22.235 [2024-12-16 10:48:22.026854] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:16:22.235 [2024-12-16 10:48:22.026862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:16:22.235 [2024-12-16 10:48:22.026872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:16:22.235 [2024-12-16 10:48:22.026879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:16:22.235 [2024-12-16 10:48:22.026898] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:22.235 [2024-12-16 10:48:22.026906] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: f9598845-bf16-4f0d-945b-060b25bf811d 00:16:22.235 [2024-12-16 10:48:22.026916] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:22.235 [2024-12-16 10:48:22.026939] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:22.235 [2024-12-16 10:48:22.026949] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:22.235 [2024-12-16 10:48:22.026957] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:22.235 [2024-12-16 10:48:22.026971] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:22.235 [2024-12-16 10:48:22.026978] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:22.235 [2024-12-16 10:48:22.026996] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:22.235 [2024-12-16 10:48:22.027003] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:22.235 [2024-12-16 10:48:22.027011] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:22.235 [2024-12-16 10:48:22.027018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:22.235 [2024-12-16 10:48:22.027033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:22.235 [2024-12-16 10:48:22.027043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.112 ms 00:16:22.235 [2024-12-16 10:48:22.027052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.235 [2024-12-16 10:48:22.029485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:22.235 [2024-12-16 10:48:22.029647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:22.235 [2024-12-16 10:48:22.029664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.393 ms 00:16:22.235 [2024-12-16 10:48:22.029675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.235 [2024-12-16 10:48:22.029798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:22.235 [2024-12-16 10:48:22.029809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:22.235 [2024-12-16 10:48:22.029819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:16:22.235 [2024-12-16 10:48:22.029831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.235 [2024-12-16 10:48:22.037038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:22.235 [2024-12-16 10:48:22.037089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:22.235 [2024-12-16 10:48:22.037099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:22.235 [2024-12-16 10:48:22.037110] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:16:22.235 [2024-12-16 10:48:22.037182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:22.235 [2024-12-16 10:48:22.037193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:22.235 [2024-12-16 10:48:22.037202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:22.235 [2024-12-16 10:48:22.037212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.235 [2024-12-16 10:48:22.037293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:22.235 [2024-12-16 10:48:22.037307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:22.235 [2024-12-16 10:48:22.037319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:22.235 [2024-12-16 10:48:22.037329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.235 [2024-12-16 10:48:22.037345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:22.235 [2024-12-16 10:48:22.037355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:22.235 [2024-12-16 10:48:22.037363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:22.235 [2024-12-16 10:48:22.037377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.235 [2024-12-16 10:48:22.051132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:22.235 [2024-12-16 10:48:22.051197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:22.235 [2024-12-16 10:48:22.051209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:22.235 [2024-12-16 10:48:22.051220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.235 [2024-12-16 10:48:22.062251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:22.235 [2024-12-16 10:48:22.062303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:22.235 [2024-12-16 10:48:22.062314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:22.235 [2024-12-16 10:48:22.062335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.235 [2024-12-16 10:48:22.062409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:22.235 [2024-12-16 10:48:22.062425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:22.235 [2024-12-16 10:48:22.062439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:22.235 [2024-12-16 10:48:22.062449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.235 [2024-12-16 10:48:22.062493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:22.235 [2024-12-16 10:48:22.062508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:22.235 [2024-12-16 10:48:22.062516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:22.235 [2024-12-16 10:48:22.062528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.235 [2024-12-16 10:48:22.062599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:22.235 [2024-12-16 10:48:22.062611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:22.235 [2024-12-16 10:48:22.062621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms 00:16:22.235 [2024-12-16 10:48:22.062631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.235 [2024-12-16 10:48:22.062664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:22.235 [2024-12-16 10:48:22.062676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:22.235 [2024-12-16 10:48:22.062685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:22.235 [2024-12-16 10:48:22.062695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.235 [2024-12-16 10:48:22.062735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:22.235 [2024-12-16 10:48:22.062751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:22.235 [2024-12-16 10:48:22.062761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:22.235 [2024-12-16 10:48:22.062771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.235 [2024-12-16 10:48:22.062823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:22.235 [2024-12-16 10:48:22.062835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:22.235 [2024-12-16 10:48:22.062843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:22.235 [2024-12-16 10:48:22.062855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.235 [2024-12-16 10:48:22.063026] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 290.026 ms, result 0 00:16:22.235 true 00:16:22.235 10:48:22 ftl.ftl_bdevperf -- ftl/bdevperf.sh@36 -- # killprocess 84657 00:16:22.235 10:48:22 ftl.ftl_bdevperf -- common/autotest_common.sh@950 -- # '[' -z 84657 ']' 00:16:22.235 10:48:22 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # kill -0 84657 00:16:22.235 10:48:22 ftl.ftl_bdevperf -- common/autotest_common.sh@955 -- # uname 00:16:22.235 10:48:22 ftl.ftl_bdevperf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:22.235 10:48:22 ftl.ftl_bdevperf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84657 00:16:22.235 killing process with pid 84657 00:16:22.235 Received shutdown signal, test time was about 4.000000 seconds 00:16:22.235 00:16:22.235 Latency(us) 00:16:22.235 [2024-12-16T10:48:22.224Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:22.235 [2024-12-16T10:48:22.224Z] =================================================================================================================== 00:16:22.235 [2024-12-16T10:48:22.225Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:22.236 10:48:22 ftl.ftl_bdevperf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:22.236 10:48:22 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:22.236 10:48:22 ftl.ftl_bdevperf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84657' 00:16:22.236 10:48:22 ftl.ftl_bdevperf -- common/autotest_common.sh@969 -- # kill 84657 00:16:22.236 10:48:22 ftl.ftl_bdevperf -- common/autotest_common.sh@974 -- # wait 84657 00:16:22.498 Remove shared memory files 00:16:22.498 10:48:22 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:16:22.498 10:48:22 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # remove_shm 00:16:22.498 10:48:22 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files 00:16:22.498 10:48:22 
ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f 00:16:22.498 10:48:22 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f 00:16:22.498 10:48:22 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f 00:16:22.498 10:48:22 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:16:22.498 10:48:22 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f 00:16:22.498 ************************************ 00:16:22.498 END TEST ftl_bdevperf 00:16:22.498 ************************************ 00:16:22.498 00:16:22.498 real 0m20.579s 00:16:22.498 user 0m23.147s 00:16:22.498 sys 0m0.958s 00:16:22.498 10:48:22 ftl.ftl_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:22.498 10:48:22 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:22.498 10:48:22 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:16:22.498 10:48:22 ftl -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:16:22.498 10:48:22 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:22.498 10:48:22 ftl -- common/autotest_common.sh@10 -- # set +x 00:16:22.498 ************************************ 00:16:22.498 START TEST ftl_trim 00:16:22.498 ************************************ 00:16:22.498 10:48:22 ftl.ftl_trim -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:16:22.760 * Looking for test storage... 00:16:22.760 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:16:22.760 10:48:22 ftl.ftl_trim -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:16:22.760 10:48:22 ftl.ftl_trim -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:16:22.760 10:48:22 ftl.ftl_trim -- common/autotest_common.sh@1681 -- # lcov --version 00:16:22.760 10:48:22 ftl.ftl_trim -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:16:22.760 10:48:22 ftl.ftl_trim -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:22.760 10:48:22 ftl.ftl_trim -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:22.760 10:48:22 ftl.ftl_trim -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:22.760 10:48:22 ftl.ftl_trim -- scripts/common.sh@336 -- # IFS=.-: 00:16:22.760 10:48:22 ftl.ftl_trim -- scripts/common.sh@336 -- # read -ra ver1 00:16:22.760 10:48:22 ftl.ftl_trim -- scripts/common.sh@337 -- # IFS=.-: 00:16:22.760 10:48:22 ftl.ftl_trim -- scripts/common.sh@337 -- # read -ra ver2 00:16:22.760 10:48:22 ftl.ftl_trim -- scripts/common.sh@338 -- # local 'op=<' 00:16:22.760 10:48:22 ftl.ftl_trim -- scripts/common.sh@340 -- # ver1_l=2 00:16:22.760 10:48:22 ftl.ftl_trim -- scripts/common.sh@341 -- # ver2_l=1 00:16:22.760 10:48:22 ftl.ftl_trim -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:22.760 10:48:22 ftl.ftl_trim -- scripts/common.sh@344 -- # case "$op" in 00:16:22.760 10:48:22 ftl.ftl_trim -- scripts/common.sh@345 -- # : 1 00:16:22.760 10:48:22 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:22.760 10:48:22 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:22.760 10:48:22 ftl.ftl_trim -- scripts/common.sh@365 -- # decimal 1 00:16:22.760 10:48:22 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=1 00:16:22.760 10:48:22 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:22.760 10:48:22 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 1 00:16:22.760 10:48:22 ftl.ftl_trim -- scripts/common.sh@365 -- # ver1[v]=1 00:16:22.760 10:48:22 ftl.ftl_trim -- scripts/common.sh@366 -- # decimal 2 00:16:22.760 10:48:22 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=2 00:16:22.760 10:48:22 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:22.760 10:48:22 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 2 00:16:22.760 10:48:22 ftl.ftl_trim -- scripts/common.sh@366 -- # ver2[v]=2 00:16:22.760 10:48:22 ftl.ftl_trim -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:22.760 10:48:22 ftl.ftl_trim -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:22.760 10:48:22 ftl.ftl_trim -- scripts/common.sh@368 -- # return 0 00:16:22.760 10:48:22 ftl.ftl_trim -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:22.760 10:48:22 ftl.ftl_trim -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:16:22.760 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:22.760 --rc genhtml_branch_coverage=1 00:16:22.760 --rc genhtml_function_coverage=1 00:16:22.760 --rc genhtml_legend=1 00:16:22.760 --rc geninfo_all_blocks=1 00:16:22.760 --rc geninfo_unexecuted_blocks=1 00:16:22.760 00:16:22.760 ' 00:16:22.760 10:48:22 ftl.ftl_trim -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:16:22.760 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:22.760 --rc genhtml_branch_coverage=1 00:16:22.760 --rc genhtml_function_coverage=1 00:16:22.760 --rc genhtml_legend=1 00:16:22.760 --rc geninfo_all_blocks=1 00:16:22.760 --rc geninfo_unexecuted_blocks=1 00:16:22.760 00:16:22.760 ' 00:16:22.760 10:48:22 ftl.ftl_trim -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:16:22.760 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:22.760 --rc genhtml_branch_coverage=1 00:16:22.760 --rc genhtml_function_coverage=1 00:16:22.760 --rc genhtml_legend=1 00:16:22.760 --rc geninfo_all_blocks=1 00:16:22.760 --rc geninfo_unexecuted_blocks=1 00:16:22.760 00:16:22.760 ' 00:16:22.760 10:48:22 ftl.ftl_trim -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:16:22.760 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:22.760 --rc genhtml_branch_coverage=1 00:16:22.760 --rc genhtml_function_coverage=1 00:16:22.760 --rc genhtml_legend=1 00:16:22.760 --rc geninfo_all_blocks=1 00:16:22.760 --rc geninfo_unexecuted_blocks=1 00:16:22.760 00:16:22.760 ' 00:16:22.760 10:48:22 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:16:22.760 10:48:22 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:16:22.760 10:48:22 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:16:22.760 10:48:22 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:16:22.760 10:48:22 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
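
The "lt 1.15 2" trace above is scripts/common.sh checking whether the installed lcov predates 2.x: both version strings are split on '.', '-' and ':' into the ver1/ver2 arrays, each field is validated as a decimal, and the fields are compared left to right with missing fields treated as 0. A minimal sketch of that scheme follows; it is not the verbatim common.sh helper (the real one also validates fields and supports more operators), just the comparison logic the trace walks through:

    lt() { cmp_versions "$1" '<' "$2"; }
    gt() { cmp_versions "$1" '>' "$2"; }

    cmp_versions() {
        local IFS=.-:            # split fields on '.', '-' and ':', as traced above
        local ver1 ver2 op=$2 v a b
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            a=${ver1[v]:-0} b=${ver2[v]:-0}   # missing fields compare as 0
            (( a > b )) && { [[ $op == '>' ]]; return; }
            (( a < b )) && { [[ $op == '<' ]]; return; }
        done
        return 1                 # all fields equal, so neither '<' nor '>' holds
    }

    lt 1.15 2 && echo "lcov is older than 2.x"   # matches the trace: 1 < 2 on field 0
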
00:16:22.760 10:48:22 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:16:22.760 10:48:22 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:22.760 10:48:22 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:16:22.760 10:48:22 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:16:22.760 10:48:22 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:22.760 10:48:22 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:22.760 10:48:22 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:16:22.760 10:48:22 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:16:22.760 10:48:22 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:22.760 10:48:22 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:22.760 10:48:22 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:16:22.760 10:48:22 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:16:22.760 10:48:22 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:22.760 10:48:22 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:22.760 10:48:22 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:16:22.760 10:48:22 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:16:22.760 10:48:22 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:22.760 10:48:22 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:22.761 10:48:22 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:22.761 10:48:22 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:22.761 10:48:22 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:16:22.761 10:48:22 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:16:22.761 10:48:22 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:22.761 10:48:22 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:22.761 10:48:22 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:22.761 10:48:22 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:16:22.761 10:48:22 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:16:22.761 10:48:22 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:16:22.761 10:48:22 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:16:22.761 10:48:22 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:16:22.761 10:48:22 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:16:22.761 10:48:22 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:16:22.761 10:48:22 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:16:22.761 10:48:22 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:22.761 10:48:22 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:22.761 10:48:22 ftl.ftl_trim -- 
ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:16:22.761 10:48:22 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=84992 00:16:22.761 10:48:22 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:16:22.761 10:48:22 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 84992 00:16:22.761 10:48:22 ftl.ftl_trim -- common/autotest_common.sh@831 -- # '[' -z 84992 ']' 00:16:22.761 10:48:22 ftl.ftl_trim -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:22.761 10:48:22 ftl.ftl_trim -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:22.761 10:48:22 ftl.ftl_trim -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:22.761 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:22.761 10:48:22 ftl.ftl_trim -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:22.761 10:48:22 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:16:22.761 [2024-12-16 10:48:22.716527] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:16:22.761 [2024-12-16 10:48:22.716966] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84992 ] 00:16:23.021 [2024-12-16 10:48:22.854274] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:23.021 [2024-12-16 10:48:22.889792] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:16:23.021 [2024-12-16 10:48:22.890048] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:16:23.021 [2024-12-16 10:48:22.890167] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:23.593 10:48:23 ftl.ftl_trim -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:23.593 10:48:23 ftl.ftl_trim -- common/autotest_common.sh@864 -- # return 0 00:16:23.593 10:48:23 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:16:23.593 10:48:23 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:16:23.593 10:48:23 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:16:23.593 10:48:23 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:16:23.593 10:48:23 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:16:23.593 10:48:23 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:16:23.854 10:48:23 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:16:23.854 10:48:23 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:16:23.854 10:48:23 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:16:23.854 10:48:23 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:16:23.854 10:48:23 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:23.854 10:48:23 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:16:23.854 10:48:23 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:16:23.854 10:48:23 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:16:24.116 10:48:24 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:24.116 { 00:16:24.116 "name": "nvme0n1", 00:16:24.116 "aliases": [ 
00:16:24.116 "e7c0fba0-4278-4daf-aac5-60cc3c424fea" 00:16:24.116 ], 00:16:24.116 "product_name": "NVMe disk", 00:16:24.116 "block_size": 4096, 00:16:24.116 "num_blocks": 1310720, 00:16:24.116 "uuid": "e7c0fba0-4278-4daf-aac5-60cc3c424fea", 00:16:24.116 "numa_id": -1, 00:16:24.116 "assigned_rate_limits": { 00:16:24.116 "rw_ios_per_sec": 0, 00:16:24.116 "rw_mbytes_per_sec": 0, 00:16:24.116 "r_mbytes_per_sec": 0, 00:16:24.116 "w_mbytes_per_sec": 0 00:16:24.116 }, 00:16:24.116 "claimed": true, 00:16:24.116 "claim_type": "read_many_write_one", 00:16:24.116 "zoned": false, 00:16:24.116 "supported_io_types": { 00:16:24.116 "read": true, 00:16:24.116 "write": true, 00:16:24.116 "unmap": true, 00:16:24.116 "flush": true, 00:16:24.116 "reset": true, 00:16:24.116 "nvme_admin": true, 00:16:24.116 "nvme_io": true, 00:16:24.116 "nvme_io_md": false, 00:16:24.116 "write_zeroes": true, 00:16:24.116 "zcopy": false, 00:16:24.116 "get_zone_info": false, 00:16:24.116 "zone_management": false, 00:16:24.116 "zone_append": false, 00:16:24.116 "compare": true, 00:16:24.116 "compare_and_write": false, 00:16:24.116 "abort": true, 00:16:24.116 "seek_hole": false, 00:16:24.116 "seek_data": false, 00:16:24.116 "copy": true, 00:16:24.116 "nvme_iov_md": false 00:16:24.116 }, 00:16:24.116 "driver_specific": { 00:16:24.116 "nvme": [ 00:16:24.116 { 00:16:24.116 "pci_address": "0000:00:11.0", 00:16:24.116 "trid": { 00:16:24.116 "trtype": "PCIe", 00:16:24.116 "traddr": "0000:00:11.0" 00:16:24.116 }, 00:16:24.116 "ctrlr_data": { 00:16:24.116 "cntlid": 0, 00:16:24.116 "vendor_id": "0x1b36", 00:16:24.116 "model_number": "QEMU NVMe Ctrl", 00:16:24.116 "serial_number": "12341", 00:16:24.116 "firmware_revision": "8.0.0", 00:16:24.116 "subnqn": "nqn.2019-08.org.qemu:12341", 00:16:24.116 "oacs": { 00:16:24.116 "security": 0, 00:16:24.116 "format": 1, 00:16:24.116 "firmware": 0, 00:16:24.116 "ns_manage": 1 00:16:24.116 }, 00:16:24.116 "multi_ctrlr": false, 00:16:24.116 "ana_reporting": false 00:16:24.116 }, 00:16:24.116 "vs": { 00:16:24.116 "nvme_version": "1.4" 00:16:24.116 }, 00:16:24.116 "ns_data": { 00:16:24.116 "id": 1, 00:16:24.116 "can_share": false 00:16:24.116 } 00:16:24.116 } 00:16:24.116 ], 00:16:24.116 "mp_policy": "active_passive" 00:16:24.116 } 00:16:24.116 } 00:16:24.116 ]' 00:16:24.116 10:48:24 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:24.116 10:48:24 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:16:24.116 10:48:24 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:24.116 10:48:24 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=1310720 00:16:24.116 10:48:24 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:16:24.116 10:48:24 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 5120 00:16:24.116 10:48:24 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:16:24.116 10:48:24 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:16:24.116 10:48:24 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:16:24.116 10:48:24 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:16:24.116 10:48:24 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:16:24.378 10:48:24 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=f5536b6c-af4e-4430-8c05-71626cbcabcd 00:16:24.378 10:48:24 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:16:24.378 10:48:24 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete_lvstore -u f5536b6c-af4e-4430-8c05-71626cbcabcd 00:16:24.638 10:48:24 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:16:24.932 10:48:24 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=8fd616ef-888e-4e77-9e48-185778e9bce5 00:16:24.932 10:48:24 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 8fd616ef-888e-4e77-9e48-185778e9bce5 00:16:25.278 10:48:24 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=2d99cb22-eb8d-4139-ac54-0dfbf1c0a61b 00:16:25.278 10:48:24 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 2d99cb22-eb8d-4139-ac54-0dfbf1c0a61b 00:16:25.278 10:48:24 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:16:25.278 10:48:24 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:16:25.278 10:48:24 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=2d99cb22-eb8d-4139-ac54-0dfbf1c0a61b 00:16:25.278 10:48:24 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:16:25.278 10:48:24 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size 2d99cb22-eb8d-4139-ac54-0dfbf1c0a61b 00:16:25.278 10:48:24 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=2d99cb22-eb8d-4139-ac54-0dfbf1c0a61b 00:16:25.278 10:48:24 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:25.278 10:48:24 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:16:25.278 10:48:24 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:16:25.278 10:48:24 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 2d99cb22-eb8d-4139-ac54-0dfbf1c0a61b 00:16:25.278 10:48:25 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:25.278 { 00:16:25.278 "name": "2d99cb22-eb8d-4139-ac54-0dfbf1c0a61b", 00:16:25.278 "aliases": [ 00:16:25.278 "lvs/nvme0n1p0" 00:16:25.278 ], 00:16:25.278 "product_name": "Logical Volume", 00:16:25.278 "block_size": 4096, 00:16:25.278 "num_blocks": 26476544, 00:16:25.278 "uuid": "2d99cb22-eb8d-4139-ac54-0dfbf1c0a61b", 00:16:25.278 "assigned_rate_limits": { 00:16:25.278 "rw_ios_per_sec": 0, 00:16:25.278 "rw_mbytes_per_sec": 0, 00:16:25.278 "r_mbytes_per_sec": 0, 00:16:25.278 "w_mbytes_per_sec": 0 00:16:25.278 }, 00:16:25.278 "claimed": false, 00:16:25.278 "zoned": false, 00:16:25.278 "supported_io_types": { 00:16:25.278 "read": true, 00:16:25.278 "write": true, 00:16:25.278 "unmap": true, 00:16:25.278 "flush": false, 00:16:25.278 "reset": true, 00:16:25.278 "nvme_admin": false, 00:16:25.278 "nvme_io": false, 00:16:25.278 "nvme_io_md": false, 00:16:25.278 "write_zeroes": true, 00:16:25.278 "zcopy": false, 00:16:25.278 "get_zone_info": false, 00:16:25.278 "zone_management": false, 00:16:25.278 "zone_append": false, 00:16:25.278 "compare": false, 00:16:25.278 "compare_and_write": false, 00:16:25.278 "abort": false, 00:16:25.278 "seek_hole": true, 00:16:25.278 "seek_data": true, 00:16:25.278 "copy": false, 00:16:25.278 "nvme_iov_md": false 00:16:25.278 }, 00:16:25.278 "driver_specific": { 00:16:25.278 "lvol": { 00:16:25.278 "lvol_store_uuid": "8fd616ef-888e-4e77-9e48-185778e9bce5", 00:16:25.278 "base_bdev": "nvme0n1", 00:16:25.278 "thin_provision": true, 00:16:25.278 "num_allocated_clusters": 0, 00:16:25.278 "snapshot": false, 00:16:25.278 "clone": false, 00:16:25.278 "esnap_clone": false 00:16:25.278 } 00:16:25.278 } 00:16:25.278 } 00:16:25.278 ]' 00:16:25.278 10:48:25 ftl.ftl_trim -- 
common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:25.278 10:48:25 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:16:25.278 10:48:25 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:25.278 10:48:25 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=26476544 00:16:25.278 10:48:25 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:16:25.278 10:48:25 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 103424 00:16:25.278 10:48:25 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:16:25.278 10:48:25 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:16:25.278 10:48:25 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:16:25.538 10:48:25 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:16:25.538 10:48:25 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:16:25.538 10:48:25 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size 2d99cb22-eb8d-4139-ac54-0dfbf1c0a61b 00:16:25.538 10:48:25 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=2d99cb22-eb8d-4139-ac54-0dfbf1c0a61b 00:16:25.538 10:48:25 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:25.538 10:48:25 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:16:25.538 10:48:25 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:16:25.538 10:48:25 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 2d99cb22-eb8d-4139-ac54-0dfbf1c0a61b 00:16:25.799 10:48:25 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:25.799 { 00:16:25.799 "name": "2d99cb22-eb8d-4139-ac54-0dfbf1c0a61b", 00:16:25.799 "aliases": [ 00:16:25.799 "lvs/nvme0n1p0" 00:16:25.799 ], 00:16:25.799 "product_name": "Logical Volume", 00:16:25.799 "block_size": 4096, 00:16:25.799 "num_blocks": 26476544, 00:16:25.799 "uuid": "2d99cb22-eb8d-4139-ac54-0dfbf1c0a61b", 00:16:25.799 "assigned_rate_limits": { 00:16:25.799 "rw_ios_per_sec": 0, 00:16:25.799 "rw_mbytes_per_sec": 0, 00:16:25.799 "r_mbytes_per_sec": 0, 00:16:25.799 "w_mbytes_per_sec": 0 00:16:25.799 }, 00:16:25.799 "claimed": false, 00:16:25.799 "zoned": false, 00:16:25.799 "supported_io_types": { 00:16:25.799 "read": true, 00:16:25.799 "write": true, 00:16:25.799 "unmap": true, 00:16:25.799 "flush": false, 00:16:25.799 "reset": true, 00:16:25.799 "nvme_admin": false, 00:16:25.799 "nvme_io": false, 00:16:25.799 "nvme_io_md": false, 00:16:25.799 "write_zeroes": true, 00:16:25.799 "zcopy": false, 00:16:25.799 "get_zone_info": false, 00:16:25.799 "zone_management": false, 00:16:25.799 "zone_append": false, 00:16:25.799 "compare": false, 00:16:25.799 "compare_and_write": false, 00:16:25.799 "abort": false, 00:16:25.799 "seek_hole": true, 00:16:25.799 "seek_data": true, 00:16:25.799 "copy": false, 00:16:25.799 "nvme_iov_md": false 00:16:25.799 }, 00:16:25.799 "driver_specific": { 00:16:25.799 "lvol": { 00:16:25.799 "lvol_store_uuid": "8fd616ef-888e-4e77-9e48-185778e9bce5", 00:16:25.799 "base_bdev": "nvme0n1", 00:16:25.799 "thin_provision": true, 00:16:25.799 "num_allocated_clusters": 0, 00:16:25.799 "snapshot": false, 00:16:25.799 "clone": false, 00:16:25.799 "esnap_clone": false 00:16:25.799 } 00:16:25.799 } 00:16:25.799 } 00:16:25.799 ]' 00:16:25.799 10:48:25 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:25.799 10:48:25 ftl.ftl_trim -- 
common/autotest_common.sh@1383 -- # bs=4096 00:16:25.799 10:48:25 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:25.799 10:48:25 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=26476544 00:16:25.799 10:48:25 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:16:25.799 10:48:25 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 103424 00:16:25.799 10:48:25 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:16:25.799 10:48:25 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:16:26.061 10:48:25 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:16:26.061 10:48:25 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:16:26.061 10:48:25 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size 2d99cb22-eb8d-4139-ac54-0dfbf1c0a61b 00:16:26.061 10:48:25 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=2d99cb22-eb8d-4139-ac54-0dfbf1c0a61b 00:16:26.061 10:48:25 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:26.061 10:48:25 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:16:26.061 10:48:25 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:16:26.061 10:48:25 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 2d99cb22-eb8d-4139-ac54-0dfbf1c0a61b 00:16:26.322 10:48:26 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:26.322 { 00:16:26.322 "name": "2d99cb22-eb8d-4139-ac54-0dfbf1c0a61b", 00:16:26.322 "aliases": [ 00:16:26.322 "lvs/nvme0n1p0" 00:16:26.322 ], 00:16:26.322 "product_name": "Logical Volume", 00:16:26.322 "block_size": 4096, 00:16:26.322 "num_blocks": 26476544, 00:16:26.322 "uuid": "2d99cb22-eb8d-4139-ac54-0dfbf1c0a61b", 00:16:26.322 "assigned_rate_limits": { 00:16:26.322 "rw_ios_per_sec": 0, 00:16:26.322 "rw_mbytes_per_sec": 0, 00:16:26.322 "r_mbytes_per_sec": 0, 00:16:26.322 "w_mbytes_per_sec": 0 00:16:26.322 }, 00:16:26.322 "claimed": false, 00:16:26.322 "zoned": false, 00:16:26.322 "supported_io_types": { 00:16:26.322 "read": true, 00:16:26.322 "write": true, 00:16:26.322 "unmap": true, 00:16:26.322 "flush": false, 00:16:26.322 "reset": true, 00:16:26.322 "nvme_admin": false, 00:16:26.322 "nvme_io": false, 00:16:26.322 "nvme_io_md": false, 00:16:26.322 "write_zeroes": true, 00:16:26.322 "zcopy": false, 00:16:26.322 "get_zone_info": false, 00:16:26.322 "zone_management": false, 00:16:26.322 "zone_append": false, 00:16:26.322 "compare": false, 00:16:26.322 "compare_and_write": false, 00:16:26.322 "abort": false, 00:16:26.322 "seek_hole": true, 00:16:26.322 "seek_data": true, 00:16:26.322 "copy": false, 00:16:26.322 "nvme_iov_md": false 00:16:26.322 }, 00:16:26.322 "driver_specific": { 00:16:26.322 "lvol": { 00:16:26.322 "lvol_store_uuid": "8fd616ef-888e-4e77-9e48-185778e9bce5", 00:16:26.322 "base_bdev": "nvme0n1", 00:16:26.322 "thin_provision": true, 00:16:26.322 "num_allocated_clusters": 0, 00:16:26.322 "snapshot": false, 00:16:26.322 "clone": false, 00:16:26.322 "esnap_clone": false 00:16:26.322 } 00:16:26.322 } 00:16:26.322 } 00:16:26.322 ]' 00:16:26.322 10:48:26 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:26.322 10:48:26 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:16:26.322 10:48:26 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:26.322 10:48:26 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # 
nb=26476544 00:16:26.322 10:48:26 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:16:26.322 10:48:26 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 103424 00:16:26.322 10:48:26 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:16:26.322 10:48:26 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 2d99cb22-eb8d-4139-ac54-0dfbf1c0a61b -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:16:26.583 [2024-12-16 10:48:26.371109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:26.583 [2024-12-16 10:48:26.371268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:26.583 [2024-12-16 10:48:26.371297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:26.583 [2024-12-16 10:48:26.371308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:26.583 [2024-12-16 10:48:26.373694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:26.583 [2024-12-16 10:48:26.373727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:26.583 [2024-12-16 10:48:26.373736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.357 ms 00:16:26.583 [2024-12-16 10:48:26.373747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:26.583 [2024-12-16 10:48:26.373841] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:26.583 [2024-12-16 10:48:26.374094] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:26.584 [2024-12-16 10:48:26.374112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:26.584 [2024-12-16 10:48:26.374123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:26.584 [2024-12-16 10:48:26.374132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.279 ms 00:16:26.584 [2024-12-16 10:48:26.374140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:26.584 [2024-12-16 10:48:26.374232] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 37bf0612-866e-4ad5-8f9f-97f4cdf0ff4f 00:16:26.584 [2024-12-16 10:48:26.375331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:26.584 [2024-12-16 10:48:26.375361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:16:26.584 [2024-12-16 10:48:26.375372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:16:26.584 [2024-12-16 10:48:26.375380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:26.584 [2024-12-16 10:48:26.380497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:26.584 [2024-12-16 10:48:26.380524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:26.584 [2024-12-16 10:48:26.380535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.047 ms 00:16:26.584 [2024-12-16 10:48:26.380553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:26.584 [2024-12-16 10:48:26.380669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:26.584 [2024-12-16 10:48:26.380688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:26.584 [2024-12-16 10:48:26.380699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.056 ms 00:16:26.584 [2024-12-16 10:48:26.380714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:26.584 [2024-12-16 10:48:26.380748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:26.584 [2024-12-16 10:48:26.380757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:26.584 [2024-12-16 10:48:26.380767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:16:26.584 [2024-12-16 10:48:26.380796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:26.584 [2024-12-16 10:48:26.380837] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:16:26.584 [2024-12-16 10:48:26.382254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:26.584 [2024-12-16 10:48:26.382364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:26.584 [2024-12-16 10:48:26.382377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.423 ms 00:16:26.584 [2024-12-16 10:48:26.382386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:26.584 [2024-12-16 10:48:26.382440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:26.584 [2024-12-16 10:48:26.382450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:26.584 [2024-12-16 10:48:26.382458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:16:26.584 [2024-12-16 10:48:26.382468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:26.584 [2024-12-16 10:48:26.382494] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:16:26.584 [2024-12-16 10:48:26.382647] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:16:26.584 [2024-12-16 10:48:26.382658] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:26.584 [2024-12-16 10:48:26.382670] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:16:26.584 [2024-12-16 10:48:26.382690] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:26.584 [2024-12-16 10:48:26.382701] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:26.584 [2024-12-16 10:48:26.382709] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:16:26.584 [2024-12-16 10:48:26.382718] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:26.584 [2024-12-16 10:48:26.382726] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:16:26.584 [2024-12-16 10:48:26.382734] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:16:26.584 [2024-12-16 10:48:26.382741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:26.584 [2024-12-16 10:48:26.382750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:26.584 [2024-12-16 10:48:26.382759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.248 ms 00:16:26.584 [2024-12-16 10:48:26.382768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:26.584 [2024-12-16 10:48:26.382858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:26.584 
[2024-12-16 10:48:26.382870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:26.584 [2024-12-16 10:48:26.382877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:16:26.584 [2024-12-16 10:48:26.382886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:26.584 [2024-12-16 10:48:26.383018] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:26.584 [2024-12-16 10:48:26.383030] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:26.584 [2024-12-16 10:48:26.383039] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:26.584 [2024-12-16 10:48:26.383051] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:26.584 [2024-12-16 10:48:26.383060] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:26.584 [2024-12-16 10:48:26.383068] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:26.584 [2024-12-16 10:48:26.383077] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:16:26.584 [2024-12-16 10:48:26.383091] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:26.584 [2024-12-16 10:48:26.383099] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:16:26.584 [2024-12-16 10:48:26.383108] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:26.584 [2024-12-16 10:48:26.383115] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:26.584 [2024-12-16 10:48:26.383124] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:16:26.584 [2024-12-16 10:48:26.383132] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:26.584 [2024-12-16 10:48:26.383143] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:26.584 [2024-12-16 10:48:26.383150] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:16:26.584 [2024-12-16 10:48:26.383159] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:26.584 [2024-12-16 10:48:26.383166] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:26.584 [2024-12-16 10:48:26.383175] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:16:26.584 [2024-12-16 10:48:26.383183] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:26.584 [2024-12-16 10:48:26.383192] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:26.584 [2024-12-16 10:48:26.383200] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:16:26.584 [2024-12-16 10:48:26.383209] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:26.584 [2024-12-16 10:48:26.383216] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:26.584 [2024-12-16 10:48:26.383225] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:16:26.584 [2024-12-16 10:48:26.383233] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:26.584 [2024-12-16 10:48:26.383241] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:26.584 [2024-12-16 10:48:26.383249] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:16:26.584 [2024-12-16 10:48:26.383269] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:26.584 [2024-12-16 10:48:26.383277] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] 
Region p2l3 00:16:26.584 [2024-12-16 10:48:26.383287] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:16:26.584 [2024-12-16 10:48:26.383295] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:26.584 [2024-12-16 10:48:26.383305] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:26.584 [2024-12-16 10:48:26.383312] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:16:26.584 [2024-12-16 10:48:26.383321] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:26.584 [2024-12-16 10:48:26.383328] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:26.584 [2024-12-16 10:48:26.383337] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:16:26.584 [2024-12-16 10:48:26.383345] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:26.584 [2024-12-16 10:48:26.383353] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:16:26.584 [2024-12-16 10:48:26.383361] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:16:26.584 [2024-12-16 10:48:26.383370] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:26.584 [2024-12-16 10:48:26.383377] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:16:26.584 [2024-12-16 10:48:26.383386] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:16:26.584 [2024-12-16 10:48:26.383393] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:26.584 [2024-12-16 10:48:26.383402] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:26.584 [2024-12-16 10:48:26.383410] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:26.584 [2024-12-16 10:48:26.383422] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:26.584 [2024-12-16 10:48:26.383429] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:26.584 [2024-12-16 10:48:26.383438] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:26.584 [2024-12-16 10:48:26.383445] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:26.584 [2024-12-16 10:48:26.383454] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:26.584 [2024-12-16 10:48:26.383465] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:26.584 [2024-12-16 10:48:26.383473] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:26.584 [2024-12-16 10:48:26.383480] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:26.584 [2024-12-16 10:48:26.383491] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:26.584 [2024-12-16 10:48:26.383504] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:26.584 [2024-12-16 10:48:26.383514] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:16:26.584 [2024-12-16 10:48:26.383521] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:16:26.584 [2024-12-16 10:48:26.383531] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 
blk_sz:0x80 00:16:26.585 [2024-12-16 10:48:26.383538] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:16:26.585 [2024-12-16 10:48:26.383547] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:16:26.585 [2024-12-16 10:48:26.383554] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:16:26.585 [2024-12-16 10:48:26.383564] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:16:26.585 [2024-12-16 10:48:26.383571] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:16:26.585 [2024-12-16 10:48:26.383580] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:16:26.585 [2024-12-16 10:48:26.383587] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:16:26.585 [2024-12-16 10:48:26.383595] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:16:26.585 [2024-12-16 10:48:26.383602] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:16:26.585 [2024-12-16 10:48:26.383611] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:16:26.585 [2024-12-16 10:48:26.383618] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:16:26.585 [2024-12-16 10:48:26.383626] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:26.585 [2024-12-16 10:48:26.383634] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:26.585 [2024-12-16 10:48:26.383643] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:26.585 [2024-12-16 10:48:26.383650] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:26.585 [2024-12-16 10:48:26.383659] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:26.585 [2024-12-16 10:48:26.383666] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:26.585 [2024-12-16 10:48:26.383675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:26.585 [2024-12-16 10:48:26.383682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:26.585 [2024-12-16 10:48:26.383695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.722 ms 00:16:26.585 [2024-12-16 10:48:26.383701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:26.585 [2024-12-16 10:48:26.383773] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region 
needs scrubbing, this may take a while. 00:16:26.585 [2024-12-16 10:48:26.383783] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:16:29.113 [2024-12-16 10:48:28.673256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.113 [2024-12-16 10:48:28.673310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:16:29.113 [2024-12-16 10:48:28.673325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2289.470 ms 00:16:29.113 [2024-12-16 10:48:28.673334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.113 [2024-12-16 10:48:28.691955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.113 [2024-12-16 10:48:28.692030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:29.113 [2024-12-16 10:48:28.692060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.499 ms 00:16:29.113 [2024-12-16 10:48:28.692076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.113 [2024-12-16 10:48:28.692346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.113 [2024-12-16 10:48:28.692373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:29.113 [2024-12-16 10:48:28.692394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.118 ms 00:16:29.113 [2024-12-16 10:48:28.692408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.113 [2024-12-16 10:48:28.702463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.113 [2024-12-16 10:48:28.702501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:29.113 [2024-12-16 10:48:28.702514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.006 ms 00:16:29.113 [2024-12-16 10:48:28.702522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.113 [2024-12-16 10:48:28.702587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.114 [2024-12-16 10:48:28.702596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:29.114 [2024-12-16 10:48:28.702606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:29.114 [2024-12-16 10:48:28.702614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.114 [2024-12-16 10:48:28.702922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.114 [2024-12-16 10:48:28.702958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:29.114 [2024-12-16 10:48:28.702968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.273 ms 00:16:29.114 [2024-12-16 10:48:28.702976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.114 [2024-12-16 10:48:28.703103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.114 [2024-12-16 10:48:28.703113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:29.114 [2024-12-16 10:48:28.703124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.092 ms 00:16:29.114 [2024-12-16 10:48:28.703132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.114 [2024-12-16 10:48:28.708473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.114 [2024-12-16 10:48:28.708513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
reloc 00:16:29.114 [2024-12-16 10:48:28.708525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.309 ms 00:16:29.114 [2024-12-16 10:48:28.708532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.114 [2024-12-16 10:48:28.716737] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:16:29.114 [2024-12-16 10:48:28.730761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.114 [2024-12-16 10:48:28.730797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:29.114 [2024-12-16 10:48:28.730808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.137 ms 00:16:29.114 [2024-12-16 10:48:28.730817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.114 [2024-12-16 10:48:28.780806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.114 [2024-12-16 10:48:28.780862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:16:29.114 [2024-12-16 10:48:28.780875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.912 ms 00:16:29.114 [2024-12-16 10:48:28.780896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.114 [2024-12-16 10:48:28.781118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.114 [2024-12-16 10:48:28.781134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:29.114 [2024-12-16 10:48:28.781143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.147 ms 00:16:29.114 [2024-12-16 10:48:28.781152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.114 [2024-12-16 10:48:28.784199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.114 [2024-12-16 10:48:28.784356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:16:29.114 [2024-12-16 10:48:28.784371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.014 ms 00:16:29.114 [2024-12-16 10:48:28.784391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.114 [2024-12-16 10:48:28.787271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.114 [2024-12-16 10:48:28.787304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:16:29.114 [2024-12-16 10:48:28.787315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.836 ms 00:16:29.114 [2024-12-16 10:48:28.787325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.114 [2024-12-16 10:48:28.787643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.114 [2024-12-16 10:48:28.787660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:29.114 [2024-12-16 10:48:28.787671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.265 ms 00:16:29.114 [2024-12-16 10:48:28.787681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.114 [2024-12-16 10:48:28.813125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.114 [2024-12-16 10:48:28.813162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:16:29.114 [2024-12-16 10:48:28.813172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.415 ms 00:16:29.114 [2024-12-16 10:48:28.813181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
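
The 'FTL startup' sequence traced above (finishing just below with result 0) belongs to the ftl0 instance assembled earlier in this test. Condensed, the RPC sequence was as follows; every command appears verbatim in the traces above, and the two sizes come from get_bdev_size, i.e. block_size * num_blocks (4096 B * 26476544 blocks = 103424 MiB for the lvol; 5171 MiB for the write-buffer slice). This is a recap of this run, not the test script itself, and the UUIDs are specific to this machine:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # base device: QEMU NVMe at 0000:00:11.0, wrapped in a thin-provisioned lvol
    $rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
    $rpc bdev_lvol_create_lvstore nvme0n1 lvs
    $rpc bdev_lvol_create nvme0n1p0 103424 -t -u 8fd616ef-888e-4e77-9e48-185778e9bce5
    # NV cache: second controller at 0000:00:10.0, split into one 5171 MiB slice
    $rpc bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
    $rpc bdev_split_create nvc0n1 -s 5171 1
    # FTL on top of both; the 240 s RPC timeout covers the NV-cache scrub above
    $rpc -t 240 bdev_ftl_create -b ftl0 -d 2d99cb22-eb8d-4139-ac54-0dfbf1c0a61b \
        -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10
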
00:16:29.114 [2024-12-16 10:48:28.816916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.114 [2024-12-16 10:48:28.816965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:16:29.114 [2024-12-16 10:48:28.816978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.665 ms 00:16:29.114 [2024-12-16 10:48:28.816988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.114 [2024-12-16 10:48:28.820059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.114 [2024-12-16 10:48:28.820092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:16:29.114 [2024-12-16 10:48:28.820102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.030 ms 00:16:29.114 [2024-12-16 10:48:28.820112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.114 [2024-12-16 10:48:28.823390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.114 [2024-12-16 10:48:28.823424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:29.114 [2024-12-16 10:48:28.823435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.232 ms 00:16:29.114 [2024-12-16 10:48:28.823446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.114 [2024-12-16 10:48:28.823493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.114 [2024-12-16 10:48:28.823505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:29.114 [2024-12-16 10:48:28.823517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:29.114 [2024-12-16 10:48:28.823527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.114 [2024-12-16 10:48:28.823596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.114 [2024-12-16 10:48:28.823606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:29.114 [2024-12-16 10:48:28.823614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:16:29.114 [2024-12-16 10:48:28.823624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.114 [2024-12-16 10:48:28.824545] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:29.114 [2024-12-16 10:48:28.825624] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2453.163 ms, result 0 00:16:29.114 [2024-12-16 10:48:28.826219] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:29.114 { 00:16:29.114 "name": "ftl0", 00:16:29.114 "uuid": "37bf0612-866e-4ad5-8f9f-97f4cdf0ff4f" 00:16:29.114 } 00:16:29.114 10:48:28 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:16:29.114 10:48:28 ftl.ftl_trim -- common/autotest_common.sh@899 -- # local bdev_name=ftl0 00:16:29.114 10:48:28 ftl.ftl_trim -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:29.114 10:48:28 ftl.ftl_trim -- common/autotest_common.sh@901 -- # local i 00:16:29.114 10:48:28 ftl.ftl_trim -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:29.114 10:48:28 ftl.ftl_trim -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:29.114 10:48:28 ftl.ftl_trim -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:16:29.114 10:48:29 ftl.ftl_trim -- 
common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:16:29.372 [ 00:16:29.372 { 00:16:29.372 "name": "ftl0", 00:16:29.372 "aliases": [ 00:16:29.372 "37bf0612-866e-4ad5-8f9f-97f4cdf0ff4f" 00:16:29.372 ], 00:16:29.372 "product_name": "FTL disk", 00:16:29.372 "block_size": 4096, 00:16:29.372 "num_blocks": 23592960, 00:16:29.372 "uuid": "37bf0612-866e-4ad5-8f9f-97f4cdf0ff4f", 00:16:29.372 "assigned_rate_limits": { 00:16:29.372 "rw_ios_per_sec": 0, 00:16:29.372 "rw_mbytes_per_sec": 0, 00:16:29.372 "r_mbytes_per_sec": 0, 00:16:29.372 "w_mbytes_per_sec": 0 00:16:29.372 }, 00:16:29.372 "claimed": false, 00:16:29.372 "zoned": false, 00:16:29.372 "supported_io_types": { 00:16:29.372 "read": true, 00:16:29.372 "write": true, 00:16:29.372 "unmap": true, 00:16:29.372 "flush": true, 00:16:29.372 "reset": false, 00:16:29.372 "nvme_admin": false, 00:16:29.372 "nvme_io": false, 00:16:29.372 "nvme_io_md": false, 00:16:29.372 "write_zeroes": true, 00:16:29.372 "zcopy": false, 00:16:29.372 "get_zone_info": false, 00:16:29.372 "zone_management": false, 00:16:29.372 "zone_append": false, 00:16:29.372 "compare": false, 00:16:29.372 "compare_and_write": false, 00:16:29.372 "abort": false, 00:16:29.372 "seek_hole": false, 00:16:29.372 "seek_data": false, 00:16:29.372 "copy": false, 00:16:29.372 "nvme_iov_md": false 00:16:29.372 }, 00:16:29.372 "driver_specific": { 00:16:29.372 "ftl": { 00:16:29.372 "base_bdev": "2d99cb22-eb8d-4139-ac54-0dfbf1c0a61b", 00:16:29.372 "cache": "nvc0n1p0" 00:16:29.372 } 00:16:29.372 } 00:16:29.372 } 00:16:29.372 ] 00:16:29.372 10:48:29 ftl.ftl_trim -- common/autotest_common.sh@907 -- # return 0 00:16:29.372 10:48:29 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:16:29.372 10:48:29 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:16:29.631 10:48:29 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:16:29.631 10:48:29 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:16:29.890 10:48:29 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:16:29.890 { 00:16:29.890 "name": "ftl0", 00:16:29.890 "aliases": [ 00:16:29.890 "37bf0612-866e-4ad5-8f9f-97f4cdf0ff4f" 00:16:29.890 ], 00:16:29.890 "product_name": "FTL disk", 00:16:29.890 "block_size": 4096, 00:16:29.890 "num_blocks": 23592960, 00:16:29.890 "uuid": "37bf0612-866e-4ad5-8f9f-97f4cdf0ff4f", 00:16:29.890 "assigned_rate_limits": { 00:16:29.890 "rw_ios_per_sec": 0, 00:16:29.890 "rw_mbytes_per_sec": 0, 00:16:29.890 "r_mbytes_per_sec": 0, 00:16:29.890 "w_mbytes_per_sec": 0 00:16:29.890 }, 00:16:29.890 "claimed": false, 00:16:29.890 "zoned": false, 00:16:29.890 "supported_io_types": { 00:16:29.890 "read": true, 00:16:29.890 "write": true, 00:16:29.890 "unmap": true, 00:16:29.890 "flush": true, 00:16:29.890 "reset": false, 00:16:29.890 "nvme_admin": false, 00:16:29.890 "nvme_io": false, 00:16:29.890 "nvme_io_md": false, 00:16:29.890 "write_zeroes": true, 00:16:29.890 "zcopy": false, 00:16:29.890 "get_zone_info": false, 00:16:29.890 "zone_management": false, 00:16:29.890 "zone_append": false, 00:16:29.890 "compare": false, 00:16:29.890 "compare_and_write": false, 00:16:29.890 "abort": false, 00:16:29.890 "seek_hole": false, 00:16:29.890 "seek_data": false, 00:16:29.890 "copy": false, 00:16:29.890 "nvme_iov_md": false 00:16:29.890 }, 00:16:29.890 "driver_specific": { 00:16:29.890 "ftl": { 00:16:29.890 "base_bdev": "2d99cb22-eb8d-4139-ac54-0dfbf1c0a61b", 
00:16:29.890 "cache": "nvc0n1p0" 00:16:29.890 } 00:16:29.890 } 00:16:29.890 } 00:16:29.890 ]' 00:16:29.890 10:48:29 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:16:29.890 10:48:29 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:16:29.891 10:48:29 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:16:29.891 [2024-12-16 10:48:29.849579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.891 [2024-12-16 10:48:29.849620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:29.891 [2024-12-16 10:48:29.849634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:29.891 [2024-12-16 10:48:29.849642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.891 [2024-12-16 10:48:29.849676] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:16:29.891 [2024-12-16 10:48:29.850130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.891 [2024-12-16 10:48:29.850173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:29.891 [2024-12-16 10:48:29.850182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.441 ms 00:16:29.891 [2024-12-16 10:48:29.850191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.891 [2024-12-16 10:48:29.850671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.891 [2024-12-16 10:48:29.850692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:29.891 [2024-12-16 10:48:29.850704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.450 ms 00:16:29.891 [2024-12-16 10:48:29.850714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.891 [2024-12-16 10:48:29.854370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.891 [2024-12-16 10:48:29.854391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:16:29.891 [2024-12-16 10:48:29.854410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.621 ms 00:16:29.891 [2024-12-16 10:48:29.854420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.891 [2024-12-16 10:48:29.861331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.891 [2024-12-16 10:48:29.861450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:16:29.891 [2024-12-16 10:48:29.861475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.861 ms 00:16:29.891 [2024-12-16 10:48:29.861491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.891 [2024-12-16 10:48:29.862823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.891 [2024-12-16 10:48:29.862861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:29.891 [2024-12-16 10:48:29.862870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.261 ms 00:16:29.891 [2024-12-16 10:48:29.862879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.891 [2024-12-16 10:48:29.866900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.891 [2024-12-16 10:48:29.866948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:29.891 [2024-12-16 10:48:29.866959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 3.981 ms 00:16:29.891 [2024-12-16 10:48:29.866968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.891 [2024-12-16 10:48:29.867135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.891 [2024-12-16 10:48:29.867163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:29.891 [2024-12-16 10:48:29.867173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.130 ms 00:16:29.891 [2024-12-16 10:48:29.867182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.891 [2024-12-16 10:48:29.868708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.891 [2024-12-16 10:48:29.868830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:16:29.891 [2024-12-16 10:48:29.868844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.496 ms 00:16:29.891 [2024-12-16 10:48:29.868855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.891 [2024-12-16 10:48:29.869996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.891 [2024-12-16 10:48:29.870024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:16:29.891 [2024-12-16 10:48:29.870033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.104 ms 00:16:29.891 [2024-12-16 10:48:29.870042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.891 [2024-12-16 10:48:29.871217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.891 [2024-12-16 10:48:29.871254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:29.891 [2024-12-16 10:48:29.871264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.134 ms 00:16:29.891 [2024-12-16 10:48:29.871273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.891 [2024-12-16 10:48:29.872148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.891 [2024-12-16 10:48:29.872182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:29.891 [2024-12-16 10:48:29.872190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.789 ms 00:16:29.891 [2024-12-16 10:48:29.872199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.891 [2024-12-16 10:48:29.872238] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:29.891 [2024-12-16 10:48:29.872253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:29.891 [2024-12-16 10:48:29.872263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:29.891 [2024-12-16 10:48:29.872274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:29.891 [2024-12-16 10:48:29.872281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:29.891 [2024-12-16 10:48:29.872290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:29.891 [2024-12-16 10:48:29.872298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:29.891 [2024-12-16 10:48:29.872307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:29.891 [2024-12-16 10:48:29.872314] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:29.891 [2024-12-16 10:48:29.872323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:29.891 [2024-12-16 10:48:29.872331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:29.891 [2024-12-16 10:48:29.872339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:16:29.891 [2024-12-16 10:48:29.872347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:16:29.891 [2024-12-16 10:48:29.872356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:16:29.891 [2024-12-16 10:48:29.872363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:16:29.891 [2024-12-16 10:48:29.872372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:16:29.891 [2024-12-16 10:48:29.872379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:16:29.891 [2024-12-16 10:48:29.872388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:16:29.891 [2024-12-16 10:48:29.872395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:16:29.891 [2024-12-16 10:48:29.872407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:16:29.891 [2024-12-16 10:48:29.872414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:16:29.891 [2024-12-16 10:48:29.872423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:16:29.891 [2024-12-16 10:48:29.872430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:16:29.891 [2024-12-16 10:48:29.872439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:16:29.891 [2024-12-16 10:48:29.872446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:16:29.891 [2024-12-16 10:48:29.872455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:16:29.891 [2024-12-16 10:48:29.872462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:16:29.891 [2024-12-16 10:48:29.872472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:16:29.891 [2024-12-16 10:48:29.872479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:16:29.891 [2024-12-16 10:48:29.872488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:16:29.891 [2024-12-16 10:48:29.872495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:16:29.891 [2024-12-16 10:48:29.872504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:16:29.891 [2024-12-16 10:48:29.872512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:16:29.891 [2024-12-16 
10:48:29.872522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:16:29.891 [2024-12-16 10:48:29.872530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:16:29.891 [2024-12-16 10:48:29.872540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:16:29.891 [2024-12-16 10:48:29.872547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:16:29.891 [2024-12-16 10:48:29.872556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:16:29.891 [2024-12-16 10:48:29.872563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:16:29.891 [2024-12-16 10:48:29.872572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:16:29.891 [2024-12-16 10:48:29.872579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:16:29.891 [2024-12-16 10:48:29.872588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:16:29.891 [2024-12-16 10:48:29.872595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:16:29.891 [2024-12-16 10:48:29.872604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:16:29.891 [2024-12-16 10:48:29.872611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:16:29.891 [2024-12-16 10:48:29.872621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:16:29.891 [2024-12-16 10:48:29.872629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:16:29.891 [2024-12-16 10:48:29.872637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:16:29.892 [2024-12-16 10:48:29.872645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:16:29.892 [2024-12-16 10:48:29.872653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:16:29.892 [2024-12-16 10:48:29.872660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:16:29.892 [2024-12-16 10:48:29.872670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:16:29.892 [2024-12-16 10:48:29.872677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:16:29.892 [2024-12-16 10:48:29.872686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:16:29.892 [2024-12-16 10:48:29.872694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:16:29.892 [2024-12-16 10:48:29.872702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:16:29.892 [2024-12-16 10:48:29.872710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:16:29.892 [2024-12-16 10:48:29.872719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 
00:16:29.892 [2024-12-16 10:48:29.872726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:16:29.892 [2024-12-16 10:48:29.872735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:16:29.892 [2024-12-16 10:48:29.872742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:16:29.892 [2024-12-16 10:48:29.872751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:16:29.892 [2024-12-16 10:48:29.872758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:16:29.892 [2024-12-16 10:48:29.872766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:16:29.892 [2024-12-16 10:48:29.872773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:16:29.892 [2024-12-16 10:48:29.872793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:16:29.892 [2024-12-16 10:48:29.872801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:16:29.892 [2024-12-16 10:48:29.872812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:16:29.892 [2024-12-16 10:48:29.872819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:16:29.892 [2024-12-16 10:48:29.872828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:16:29.892 [2024-12-16 10:48:29.872836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:16:29.892 [2024-12-16 10:48:29.872846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:16:29.892 [2024-12-16 10:48:29.872853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:16:29.892 [2024-12-16 10:48:29.872862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:16:29.892 [2024-12-16 10:48:29.872869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:16:29.892 [2024-12-16 10:48:29.872878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:16:29.892 [2024-12-16 10:48:29.872886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:16:29.892 [2024-12-16 10:48:29.872894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:16:29.892 [2024-12-16 10:48:29.872901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:16:29.892 [2024-12-16 10:48:29.872910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:16:29.892 [2024-12-16 10:48:29.872917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:16:29.892 [2024-12-16 10:48:29.872942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:16:29.892 [2024-12-16 10:48:29.872951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 
wr_cnt: 0 state: free 00:16:29.892 [2024-12-16 10:48:29.872962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:16:29.892 [2024-12-16 10:48:29.872970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:16:29.892 [2024-12-16 10:48:29.872980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:16:29.892 [2024-12-16 10:48:29.872987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:16:29.892 [2024-12-16 10:48:29.872996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:16:29.892 [2024-12-16 10:48:29.873003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:16:29.892 [2024-12-16 10:48:29.873012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:16:29.892 [2024-12-16 10:48:29.873031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:16:29.892 [2024-12-16 10:48:29.873041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:16:29.892 [2024-12-16 10:48:29.873048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:16:29.892 [2024-12-16 10:48:29.873057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:16:29.892 [2024-12-16 10:48:29.873064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:16:29.892 [2024-12-16 10:48:29.873073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:16:29.892 [2024-12-16 10:48:29.873081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:16:29.892 [2024-12-16 10:48:29.873091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:16:29.892 [2024-12-16 10:48:29.873098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:16:29.892 [2024-12-16 10:48:29.873109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:16:29.892 [2024-12-16 10:48:29.873116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:16:29.892 [2024-12-16 10:48:29.873134] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:29.892 [2024-12-16 10:48:29.873141] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 37bf0612-866e-4ad5-8f9f-97f4cdf0ff4f 00:16:29.892 [2024-12-16 10:48:29.873151] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:29.892 [2024-12-16 10:48:29.873157] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:29.892 [2024-12-16 10:48:29.873166] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:29.892 [2024-12-16 10:48:29.873173] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:29.892 [2024-12-16 10:48:29.873182] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:29.892 [2024-12-16 10:48:29.873191] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:29.892 
[2024-12-16 10:48:29.873199] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:29.892 [2024-12-16 10:48:29.873205] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:29.892 [2024-12-16 10:48:29.873213] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:29.892 [2024-12-16 10:48:29.873220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.892 [2024-12-16 10:48:29.873229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:29.892 [2024-12-16 10:48:29.873237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.983 ms 00:16:29.892 [2024-12-16 10:48:29.873247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.892 [2024-12-16 10:48:29.874755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.892 [2024-12-16 10:48:29.874775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:29.892 [2024-12-16 10:48:29.874784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.484 ms 00:16:29.892 [2024-12-16 10:48:29.874795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.892 [2024-12-16 10:48:29.874903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.892 [2024-12-16 10:48:29.874914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:29.892 [2024-12-16 10:48:29.874923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:16:29.892 [2024-12-16 10:48:29.874944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:30.151 [2024-12-16 10:48:29.880069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:30.151 [2024-12-16 10:48:29.880102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:30.151 [2024-12-16 10:48:29.880114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:30.151 [2024-12-16 10:48:29.880122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:30.151 [2024-12-16 10:48:29.880208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:30.151 [2024-12-16 10:48:29.880219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:30.151 [2024-12-16 10:48:29.880228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:30.151 [2024-12-16 10:48:29.880238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:30.151 [2024-12-16 10:48:29.880300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:30.151 [2024-12-16 10:48:29.880311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:30.151 [2024-12-16 10:48:29.880319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:30.151 [2024-12-16 10:48:29.880330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:30.151 [2024-12-16 10:48:29.880356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:30.151 [2024-12-16 10:48:29.880365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:30.151 [2024-12-16 10:48:29.880373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:30.151 [2024-12-16 10:48:29.880382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:30.151 [2024-12-16 10:48:29.889394] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Rollback 00:16:30.151 [2024-12-16 10:48:29.889436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:30.151 [2024-12-16 10:48:29.889448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:30.151 [2024-12-16 10:48:29.889457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:30.151 [2024-12-16 10:48:29.896971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:30.151 [2024-12-16 10:48:29.897008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:30.151 [2024-12-16 10:48:29.897017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:30.151 [2024-12-16 10:48:29.897029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:30.151 [2024-12-16 10:48:29.897094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:30.151 [2024-12-16 10:48:29.897106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:30.151 [2024-12-16 10:48:29.897114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:30.151 [2024-12-16 10:48:29.897123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:30.151 [2024-12-16 10:48:29.897171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:30.151 [2024-12-16 10:48:29.897181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:30.151 [2024-12-16 10:48:29.897188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:30.151 [2024-12-16 10:48:29.897197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:30.151 [2024-12-16 10:48:29.897271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:30.151 [2024-12-16 10:48:29.897282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:30.151 [2024-12-16 10:48:29.897290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:30.151 [2024-12-16 10:48:29.897299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:30.151 [2024-12-16 10:48:29.897346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:30.151 [2024-12-16 10:48:29.897359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:30.151 [2024-12-16 10:48:29.897367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:30.151 [2024-12-16 10:48:29.897377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:30.151 [2024-12-16 10:48:29.897415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:30.151 [2024-12-16 10:48:29.897425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:30.151 [2024-12-16 10:48:29.897432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:30.151 [2024-12-16 10:48:29.897441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:30.151 [2024-12-16 10:48:29.897498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:30.151 [2024-12-16 10:48:29.897509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:30.151 [2024-12-16 10:48:29.897517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:30.151 [2024-12-16 10:48:29.897536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:30.151 
[2024-12-16 10:48:29.897696] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 48.103 ms, result 0 00:16:30.151 true 00:16:30.151 10:48:29 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 84992 00:16:30.151 10:48:29 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 84992 ']' 00:16:30.151 10:48:29 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 84992 00:16:30.151 10:48:29 ftl.ftl_trim -- common/autotest_common.sh@955 -- # uname 00:16:30.151 10:48:29 ftl.ftl_trim -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:30.151 10:48:29 ftl.ftl_trim -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84992 00:16:30.151 killing process with pid 84992 00:16:30.151 10:48:29 ftl.ftl_trim -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:30.151 10:48:29 ftl.ftl_trim -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:30.151 10:48:29 ftl.ftl_trim -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84992' 00:16:30.151 10:48:29 ftl.ftl_trim -- common/autotest_common.sh@969 -- # kill 84992 00:16:30.151 10:48:29 ftl.ftl_trim -- common/autotest_common.sh@974 -- # wait 84992 00:16:35.415 10:48:34 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:16:35.981 65536+0 records in 00:16:35.981 65536+0 records out 00:16:35.981 268435456 bytes (268 MB, 256 MiB) copied, 0.800269 s, 335 MB/s 00:16:35.981 10:48:35 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:35.981 [2024-12-16 10:48:35.766745] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
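The copy step above sizes everything by simple arithmetic: jq pulls num_blocks (23592960) out of the bdev_get_bdevs JSON, i.e. 23592960 blocks x 4096 B = 96636764160 B (90 GiB) of addressable FTL space, which also matches the "L2P entries: 23592960" reported in the layout dump below, while dd's 65536 x 4 KiB payload is 268435456 bytes, exactly the "268 MB, 256 MiB" it reports. A minimal sketch of both checks from an SPDK checkout, assuming an ftl0 bdev is already up and that dd's destination (elided from the captured command) is test/ftl/random_pattern, the path spdk_dd reads:

    # derive the bdev's byte size from block count x block size
    nb=$(scripts/rpc.py bdev_get_bdevs -b ftl0 | jq '.[0].num_blocks')
    bs=$(scripts/rpc.py bdev_get_bdevs -b ftl0 | jq '.[0].block_size')
    echo $((nb * bs))            # 23592960 * 4096 = 96636764160 (90 GiB)
    # regenerate the 256 MiB random payload consumed by spdk_dd
    dd if=/dev/urandom of=test/ftl/random_pattern bs=4K count=65536
    # 65536 * 4096 = 268435456 B, matching "268435456 bytes (268 MB, 256 MiB) copied"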
00:16:35.981 [2024-12-16 10:48:35.766986] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85153 ] 00:16:35.981 [2024-12-16 10:48:35.902258] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:35.981 [2024-12-16 10:48:35.935509] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:36.243 [2024-12-16 10:48:36.021942] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:36.243 [2024-12-16 10:48:36.022001] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:36.243 [2024-12-16 10:48:36.179432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:36.243 [2024-12-16 10:48:36.179480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:36.243 [2024-12-16 10:48:36.179493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:36.243 [2024-12-16 10:48:36.179501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:36.243 [2024-12-16 10:48:36.181815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:36.243 [2024-12-16 10:48:36.181857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:36.243 [2024-12-16 10:48:36.181872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.298 ms 00:16:36.243 [2024-12-16 10:48:36.181880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:36.243 [2024-12-16 10:48:36.181965] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:36.243 [2024-12-16 10:48:36.182469] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:36.243 [2024-12-16 10:48:36.182508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:36.243 [2024-12-16 10:48:36.182519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:36.243 [2024-12-16 10:48:36.182530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.549 ms 00:16:36.243 [2024-12-16 10:48:36.182538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:36.243 [2024-12-16 10:48:36.183980] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:16:36.243 [2024-12-16 10:48:36.186797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:36.243 [2024-12-16 10:48:36.186834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:16:36.243 [2024-12-16 10:48:36.186850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.819 ms 00:16:36.243 [2024-12-16 10:48:36.186865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:36.243 [2024-12-16 10:48:36.186949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:36.243 [2024-12-16 10:48:36.186964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:16:36.243 [2024-12-16 10:48:36.186973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:16:36.243 [2024-12-16 10:48:36.186982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:36.243 [2024-12-16 10:48:36.191813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:16:36.243 [2024-12-16 10:48:36.191842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:36.243 [2024-12-16 10:48:36.191852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.790 ms 00:16:36.243 [2024-12-16 10:48:36.191860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:36.243 [2024-12-16 10:48:36.191982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:36.243 [2024-12-16 10:48:36.191995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:36.243 [2024-12-16 10:48:36.192004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:16:36.243 [2024-12-16 10:48:36.192014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:36.243 [2024-12-16 10:48:36.192041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:36.244 [2024-12-16 10:48:36.192049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:36.244 [2024-12-16 10:48:36.192060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:16:36.244 [2024-12-16 10:48:36.192067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:36.244 [2024-12-16 10:48:36.192093] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:16:36.244 [2024-12-16 10:48:36.193421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:36.244 [2024-12-16 10:48:36.193448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:36.244 [2024-12-16 10:48:36.193458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.336 ms 00:16:36.244 [2024-12-16 10:48:36.193464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:36.244 [2024-12-16 10:48:36.193506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:36.244 [2024-12-16 10:48:36.193518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:36.244 [2024-12-16 10:48:36.193531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:16:36.244 [2024-12-16 10:48:36.193538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:36.244 [2024-12-16 10:48:36.193554] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:16:36.244 [2024-12-16 10:48:36.193570] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:16:36.244 [2024-12-16 10:48:36.193610] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:16:36.244 [2024-12-16 10:48:36.193628] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:16:36.244 [2024-12-16 10:48:36.193731] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:16:36.244 [2024-12-16 10:48:36.193741] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:36.244 [2024-12-16 10:48:36.193751] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:16:36.244 [2024-12-16 10:48:36.193761] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:36.244 [2024-12-16 10:48:36.193773] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:36.244 [2024-12-16 10:48:36.193783] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:16:36.244 [2024-12-16 10:48:36.193791] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:36.244 [2024-12-16 10:48:36.193798] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:16:36.244 [2024-12-16 10:48:36.193805] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:16:36.244 [2024-12-16 10:48:36.193812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:36.244 [2024-12-16 10:48:36.193821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:36.244 [2024-12-16 10:48:36.193830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.260 ms 00:16:36.244 [2024-12-16 10:48:36.193840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:36.244 [2024-12-16 10:48:36.193938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:36.244 [2024-12-16 10:48:36.193947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:36.244 [2024-12-16 10:48:36.193955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:16:36.244 [2024-12-16 10:48:36.193962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:36.244 [2024-12-16 10:48:36.194077] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:36.244 [2024-12-16 10:48:36.194088] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:36.244 [2024-12-16 10:48:36.194097] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:36.244 [2024-12-16 10:48:36.194110] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:36.244 [2024-12-16 10:48:36.194125] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:36.244 [2024-12-16 10:48:36.194132] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:36.244 [2024-12-16 10:48:36.194141] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:16:36.244 [2024-12-16 10:48:36.194149] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:36.244 [2024-12-16 10:48:36.194158] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:16:36.244 [2024-12-16 10:48:36.194166] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:36.244 [2024-12-16 10:48:36.194173] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:36.244 [2024-12-16 10:48:36.194181] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:16:36.244 [2024-12-16 10:48:36.194188] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:36.244 [2024-12-16 10:48:36.194196] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:36.244 [2024-12-16 10:48:36.194203] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:16:36.244 [2024-12-16 10:48:36.194210] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:36.244 [2024-12-16 10:48:36.194218] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:36.244 [2024-12-16 10:48:36.194225] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:16:36.244 [2024-12-16 10:48:36.194232] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:36.244 [2024-12-16 10:48:36.194240] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:36.244 [2024-12-16 10:48:36.194250] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:16:36.244 [2024-12-16 10:48:36.194257] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:36.244 [2024-12-16 10:48:36.194265] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:36.244 [2024-12-16 10:48:36.194272] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:16:36.244 [2024-12-16 10:48:36.194282] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:36.244 [2024-12-16 10:48:36.194290] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:36.244 [2024-12-16 10:48:36.194298] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:16:36.244 [2024-12-16 10:48:36.194305] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:36.244 [2024-12-16 10:48:36.194312] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:36.244 [2024-12-16 10:48:36.194320] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:16:36.244 [2024-12-16 10:48:36.194327] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:36.244 [2024-12-16 10:48:36.194335] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:36.244 [2024-12-16 10:48:36.194342] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:16:36.244 [2024-12-16 10:48:36.194349] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:36.244 [2024-12-16 10:48:36.194357] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:36.244 [2024-12-16 10:48:36.194364] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:16:36.244 [2024-12-16 10:48:36.194371] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:36.244 [2024-12-16 10:48:36.194378] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:16:36.244 [2024-12-16 10:48:36.194385] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:16:36.244 [2024-12-16 10:48:36.194393] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:36.244 [2024-12-16 10:48:36.194402] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:16:36.244 [2024-12-16 10:48:36.194410] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:16:36.244 [2024-12-16 10:48:36.194418] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:36.244 [2024-12-16 10:48:36.194425] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:36.244 [2024-12-16 10:48:36.194433] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:36.244 [2024-12-16 10:48:36.194441] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:36.244 [2024-12-16 10:48:36.194449] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:36.244 [2024-12-16 10:48:36.194457] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:36.244 [2024-12-16 10:48:36.194465] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:36.244 [2024-12-16 10:48:36.194473] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:36.244 
[2024-12-16 10:48:36.194480] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:36.244 [2024-12-16 10:48:36.194488] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:36.244 [2024-12-16 10:48:36.194495] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:36.244 [2024-12-16 10:48:36.194505] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:36.244 [2024-12-16 10:48:36.194518] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:36.244 [2024-12-16 10:48:36.194527] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:16:36.244 [2024-12-16 10:48:36.194538] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:16:36.244 [2024-12-16 10:48:36.194546] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:16:36.244 [2024-12-16 10:48:36.194554] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:16:36.244 [2024-12-16 10:48:36.194562] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:16:36.244 [2024-12-16 10:48:36.194570] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:16:36.244 [2024-12-16 10:48:36.194578] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:16:36.244 [2024-12-16 10:48:36.194591] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:16:36.244 [2024-12-16 10:48:36.194599] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:16:36.244 [2024-12-16 10:48:36.194606] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:16:36.244 [2024-12-16 10:48:36.194614] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:16:36.244 [2024-12-16 10:48:36.194622] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:16:36.244 [2024-12-16 10:48:36.194630] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:16:36.244 [2024-12-16 10:48:36.194639] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:16:36.245 [2024-12-16 10:48:36.194646] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:36.245 [2024-12-16 10:48:36.194656] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:36.245 [2024-12-16 10:48:36.194665] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:16:36.245 [2024-12-16 10:48:36.194675] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:36.245 [2024-12-16 10:48:36.194683] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:36.245 [2024-12-16 10:48:36.194691] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:36.245 [2024-12-16 10:48:36.194699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:36.245 [2024-12-16 10:48:36.194707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:36.245 [2024-12-16 10:48:36.194717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.701 ms 00:16:36.245 [2024-12-16 10:48:36.194724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:36.245 [2024-12-16 10:48:36.211030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:36.245 [2024-12-16 10:48:36.211069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:36.245 [2024-12-16 10:48:36.211082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.256 ms 00:16:36.245 [2024-12-16 10:48:36.211091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:36.245 [2024-12-16 10:48:36.211222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:36.245 [2024-12-16 10:48:36.211234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:36.245 [2024-12-16 10:48:36.211243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:16:36.245 [2024-12-16 10:48:36.211255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:36.245 [2024-12-16 10:48:36.219110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:36.245 [2024-12-16 10:48:36.219242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:36.245 [2024-12-16 10:48:36.219259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.833 ms 00:16:36.245 [2024-12-16 10:48:36.219269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:36.245 [2024-12-16 10:48:36.219315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:36.245 [2024-12-16 10:48:36.219329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:36.245 [2024-12-16 10:48:36.219341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:36.245 [2024-12-16 10:48:36.219349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:36.245 [2024-12-16 10:48:36.219670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:36.245 [2024-12-16 10:48:36.219694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:36.245 [2024-12-16 10:48:36.219710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.303 ms 00:16:36.245 [2024-12-16 10:48:36.219719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:36.245 [2024-12-16 10:48:36.219850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:36.245 [2024-12-16 10:48:36.219864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:36.245 [2024-12-16 10:48:36.219873] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.107 ms 00:16:36.245 [2024-12-16 10:48:36.219887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:36.245 [2024-12-16 10:48:36.224772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:36.245 [2024-12-16 10:48:36.224808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:36.245 [2024-12-16 10:48:36.224823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.860 ms 00:16:36.245 [2024-12-16 10:48:36.224831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:36.245 [2024-12-16 10:48:36.227659] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:16:36.245 [2024-12-16 10:48:36.227695] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:16:36.245 [2024-12-16 10:48:36.227712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:36.245 [2024-12-16 10:48:36.227721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:16:36.245 [2024-12-16 10:48:36.227731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.769 ms 00:16:36.245 [2024-12-16 10:48:36.227739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:36.504 [2024-12-16 10:48:36.242414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:36.504 [2024-12-16 10:48:36.242545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:16:36.504 [2024-12-16 10:48:36.242570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.631 ms 00:16:36.504 [2024-12-16 10:48:36.242579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:36.505 [2024-12-16 10:48:36.244632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:36.505 [2024-12-16 10:48:36.244664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:16:36.505 [2024-12-16 10:48:36.244673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.981 ms 00:16:36.505 [2024-12-16 10:48:36.244681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:36.505 [2024-12-16 10:48:36.246471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:36.505 [2024-12-16 10:48:36.246500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:16:36.505 [2024-12-16 10:48:36.246509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.753 ms 00:16:36.505 [2024-12-16 10:48:36.246522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:36.505 [2024-12-16 10:48:36.246831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:36.505 [2024-12-16 10:48:36.246841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:36.505 [2024-12-16 10:48:36.246849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.248 ms 00:16:36.505 [2024-12-16 10:48:36.246856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:36.505 [2024-12-16 10:48:36.262140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:36.505 [2024-12-16 10:48:36.262190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:16:36.505 [2024-12-16 10:48:36.262201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
15.247 ms 00:16:36.505 [2024-12-16 10:48:36.262209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:36.505 [2024-12-16 10:48:36.269638] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:16:36.505 [2024-12-16 10:48:36.284026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:36.505 [2024-12-16 10:48:36.284061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:36.505 [2024-12-16 10:48:36.284072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.749 ms 00:16:36.505 [2024-12-16 10:48:36.284086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:36.505 [2024-12-16 10:48:36.284173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:36.505 [2024-12-16 10:48:36.284187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:16:36.505 [2024-12-16 10:48:36.284200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:16:36.505 [2024-12-16 10:48:36.284207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:36.505 [2024-12-16 10:48:36.284257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:36.505 [2024-12-16 10:48:36.284272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:36.505 [2024-12-16 10:48:36.284280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:16:36.505 [2024-12-16 10:48:36.284287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:36.505 [2024-12-16 10:48:36.284308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:36.505 [2024-12-16 10:48:36.284315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:36.505 [2024-12-16 10:48:36.284323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:36.505 [2024-12-16 10:48:36.284333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:36.505 [2024-12-16 10:48:36.284365] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:16:36.505 [2024-12-16 10:48:36.284378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:36.505 [2024-12-16 10:48:36.284385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:16:36.505 [2024-12-16 10:48:36.284394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:16:36.505 [2024-12-16 10:48:36.284401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:36.505 [2024-12-16 10:48:36.288657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:36.505 [2024-12-16 10:48:36.288696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:36.505 [2024-12-16 10:48:36.288706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.236 ms 00:16:36.505 [2024-12-16 10:48:36.288714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:36.505 [2024-12-16 10:48:36.288814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:36.505 [2024-12-16 10:48:36.288824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:36.505 [2024-12-16 10:48:36.288833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:16:36.505 [2024-12-16 10:48:36.288843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:36.505 
[2024-12-16 10:48:36.289606] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:36.505 [2024-12-16 10:48:36.290632] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 109.911 ms, result 0 00:16:36.505 [2024-12-16 10:48:36.291731] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:36.505 [2024-12-16 10:48:36.301172] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:37.448  [2024-12-16T10:48:38.375Z] Copying: 20/256 [MB] (20 MBps) [2024-12-16T10:48:39.314Z] Copying: 49/256 [MB] (29 MBps) [2024-12-16T10:48:40.699Z] Copying: 78/256 [MB] (28 MBps) [2024-12-16T10:48:41.636Z] Copying: 101/256 [MB] (22 MBps) [2024-12-16T10:48:42.578Z] Copying: 124/256 [MB] (23 MBps) [2024-12-16T10:48:43.534Z] Copying: 155/256 [MB] (30 MBps) [2024-12-16T10:48:44.478Z] Copying: 176/256 [MB] (20 MBps) [2024-12-16T10:48:45.422Z] Copying: 198/256 [MB] (22 MBps) [2024-12-16T10:48:46.364Z] Copying: 220/256 [MB] (22 MBps) [2024-12-16T10:48:47.310Z] Copying: 242/256 [MB] (21 MBps) [2024-12-16T10:48:47.310Z] Copying: 256/256 [MB] (average 23 MBps)[2024-12-16 10:48:47.137974] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:47.321 [2024-12-16 10:48:47.140149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.321 [2024-12-16 10:48:47.140359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:47.321 [2024-12-16 10:48:47.140501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:47.321 [2024-12-16 10:48:47.140549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.321 [2024-12-16 10:48:47.140680] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:16:47.321 [2024-12-16 10:48:47.141551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.321 [2024-12-16 10:48:47.141743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:47.321 [2024-12-16 10:48:47.141870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.748 ms 00:16:47.321 [2024-12-16 10:48:47.141916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.321 [2024-12-16 10:48:47.144841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.321 [2024-12-16 10:48:47.145047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:47.321 [2024-12-16 10:48:47.145137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.707 ms 00:16:47.321 [2024-12-16 10:48:47.145189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.321 [2024-12-16 10:48:47.152867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.321 [2024-12-16 10:48:47.153098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:16:47.321 [2024-12-16 10:48:47.153196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.575 ms 00:16:47.321 [2024-12-16 10:48:47.153288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.321 [2024-12-16 10:48:47.160307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.321 [2024-12-16 10:48:47.160484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Finish L2P trims 00:16:47.321 [2024-12-16 10:48:47.160573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.941 ms 00:16:47.321 [2024-12-16 10:48:47.160602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.321 [2024-12-16 10:48:47.163539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.321 [2024-12-16 10:48:47.163596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:47.321 [2024-12-16 10:48:47.163607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.798 ms 00:16:47.321 [2024-12-16 10:48:47.163615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.321 [2024-12-16 10:48:47.168275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.321 [2024-12-16 10:48:47.168343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:47.321 [2024-12-16 10:48:47.168354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.608 ms 00:16:47.321 [2024-12-16 10:48:47.168366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.321 [2024-12-16 10:48:47.168505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.321 [2024-12-16 10:48:47.168515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:47.321 [2024-12-16 10:48:47.168525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:16:47.321 [2024-12-16 10:48:47.168532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.321 [2024-12-16 10:48:47.171893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.321 [2024-12-16 10:48:47.171966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:16:47.321 [2024-12-16 10:48:47.171978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.341 ms 00:16:47.321 [2024-12-16 10:48:47.171986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.321 [2024-12-16 10:48:47.175068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.321 [2024-12-16 10:48:47.175123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:16:47.321 [2024-12-16 10:48:47.175134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.027 ms 00:16:47.321 [2024-12-16 10:48:47.175142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.321 [2024-12-16 10:48:47.177530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.321 [2024-12-16 10:48:47.177587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:47.321 [2024-12-16 10:48:47.177597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.337 ms 00:16:47.321 [2024-12-16 10:48:47.177606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.321 [2024-12-16 10:48:47.180080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.321 [2024-12-16 10:48:47.180133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:47.321 [2024-12-16 10:48:47.180145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.391 ms 00:16:47.321 [2024-12-16 10:48:47.180152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.321 [2024-12-16 10:48:47.180201] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 
00:16:47.321 [2024-12-16 10:48:47.180219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:47.321 [2024-12-16 10:48:47.180239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:47.321 [2024-12-16 10:48:47.180248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:47.321 [2024-12-16 10:48:47.180258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:47.322 [2024-12-16 10:48:47.180267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:47.322 [2024-12-16 10:48:47.180276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:47.322 [2024-12-16 10:48:47.180285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:47.322 [2024-12-16 10:48:47.180294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:47.322 [2024-12-16 10:48:47.180303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:47.322 [2024-12-16 10:48:47.180311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:47.322 [2024-12-16 10:48:47.180320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:16:47.322 [2024-12-16 10:48:47.180328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:16:47.322 [2024-12-16 10:48:47.180335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:16:47.322 [2024-12-16 10:48:47.180343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:16:47.322 [2024-12-16 10:48:47.180350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:16:47.322 [2024-12-16 10:48:47.180358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:16:47.322 [2024-12-16 10:48:47.180365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:16:47.322 [2024-12-16 10:48:47.180372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:16:47.322 [2024-12-16 10:48:47.180379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:16:47.322 [2024-12-16 10:48:47.180388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:16:47.322 [2024-12-16 10:48:47.180396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:16:47.322 [2024-12-16 10:48:47.180403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:16:47.322 [2024-12-16 10:48:47.180411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:16:47.322 [2024-12-16 10:48:47.180418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:16:47.322 [2024-12-16 10:48:47.180425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 
state: free 00:16:47.322 [2024-12-16 10:48:47.180432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:16:47.322 [2024-12-16 10:48:47.180440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:16:47.322 [2024-12-16 10:48:47.180448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:16:47.322 [2024-12-16 10:48:47.180456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:16:47.322 [2024-12-16 10:48:47.180463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:16:47.322 [2024-12-16 10:48:47.180471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:16:47.322 [2024-12-16 10:48:47.180479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:16:47.322 [2024-12-16 10:48:47.180487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:16:47.322 [2024-12-16 10:48:47.180496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:16:47.322 [2024-12-16 10:48:47.180504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:16:47.322 [2024-12-16 10:48:47.180511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:16:47.322 [2024-12-16 10:48:47.180530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:16:47.322 [2024-12-16 10:48:47.180539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:16:47.322 [2024-12-16 10:48:47.180546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:16:47.322 [2024-12-16 10:48:47.180554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:16:47.322 [2024-12-16 10:48:47.180561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:16:47.322 [2024-12-16 10:48:47.180569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:16:47.322 [2024-12-16 10:48:47.180576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:16:47.322 [2024-12-16 10:48:47.180584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:16:47.322 [2024-12-16 10:48:47.180592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:16:47.322 [2024-12-16 10:48:47.180600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:16:47.322 [2024-12-16 10:48:47.180607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:16:47.322 [2024-12-16 10:48:47.180614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:16:47.322 [2024-12-16 10:48:47.180622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:16:47.322 [2024-12-16 10:48:47.180629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 
0 / 261120 wr_cnt: 0 state: free 00:16:47.322 [2024-12-16 10:48:47.180637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:16:47.322 [2024-12-16 10:48:47.180644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:16:47.322 [2024-12-16 10:48:47.180652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:16:47.322 [2024-12-16 10:48:47.180659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:16:47.322 [2024-12-16 10:48:47.180666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:16:47.322 [2024-12-16 10:48:47.180674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:16:47.322 [2024-12-16 10:48:47.180682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:16:47.322 [2024-12-16 10:48:47.180689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:16:47.322 [2024-12-16 10:48:47.180697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:16:47.322 [2024-12-16 10:48:47.180704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:16:47.322 [2024-12-16 10:48:47.180712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:16:47.322 [2024-12-16 10:48:47.180720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:16:47.322 [2024-12-16 10:48:47.180727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:16:47.322 [2024-12-16 10:48:47.180736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:16:47.322 [2024-12-16 10:48:47.180744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:16:47.322 [2024-12-16 10:48:47.180751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:16:47.322 [2024-12-16 10:48:47.180759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:16:47.322 [2024-12-16 10:48:47.180767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:16:47.322 [2024-12-16 10:48:47.180774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:16:47.322 [2024-12-16 10:48:47.180805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:16:47.322 [2024-12-16 10:48:47.180814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:16:47.322 [2024-12-16 10:48:47.180822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:16:47.322 [2024-12-16 10:48:47.180829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:16:47.322 [2024-12-16 10:48:47.180837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:16:47.322 [2024-12-16 10:48:47.180844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:16:47.322 [2024-12-16 10:48:47.180852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:16:47.322 [2024-12-16 10:48:47.180859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:16:47.322 [2024-12-16 10:48:47.180867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:16:47.322 [2024-12-16 10:48:47.180875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:16:47.322 [2024-12-16 10:48:47.180884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:16:47.322 [2024-12-16 10:48:47.180891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:16:47.322 [2024-12-16 10:48:47.180898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:16:47.322 [2024-12-16 10:48:47.180905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:16:47.322 [2024-12-16 10:48:47.180912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:16:47.322 [2024-12-16 10:48:47.180920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:16:47.322 [2024-12-16 10:48:47.180954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:16:47.322 [2024-12-16 10:48:47.180962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:16:47.322 [2024-12-16 10:48:47.180970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:16:47.322 [2024-12-16 10:48:47.180978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:16:47.322 [2024-12-16 10:48:47.180985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:16:47.322 [2024-12-16 10:48:47.180993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:16:47.322 [2024-12-16 10:48:47.181001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:16:47.322 [2024-12-16 10:48:47.181008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:16:47.322 [2024-12-16 10:48:47.181016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:16:47.322 [2024-12-16 10:48:47.181023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:16:47.322 [2024-12-16 10:48:47.181041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:16:47.323 [2024-12-16 10:48:47.181049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:16:47.323 [2024-12-16 10:48:47.181057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:16:47.323 [2024-12-16 10:48:47.181064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:16:47.323 [2024-12-16 10:48:47.181072] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:16:47.323 [2024-12-16 10:48:47.181088] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:47.323 [2024-12-16 10:48:47.181096] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 37bf0612-866e-4ad5-8f9f-97f4cdf0ff4f 00:16:47.323 [2024-12-16 10:48:47.181105] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:47.323 [2024-12-16 10:48:47.181112] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:47.323 [2024-12-16 10:48:47.181120] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:47.323 [2024-12-16 10:48:47.181128] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:47.323 [2024-12-16 10:48:47.181141] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:47.323 [2024-12-16 10:48:47.181149] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:47.323 [2024-12-16 10:48:47.181156] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:47.323 [2024-12-16 10:48:47.181163] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:47.323 [2024-12-16 10:48:47.181169] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:47.323 [2024-12-16 10:48:47.181176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.323 [2024-12-16 10:48:47.181184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:47.323 [2024-12-16 10:48:47.181193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.977 ms 00:16:47.323 [2024-12-16 10:48:47.181204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.323 [2024-12-16 10:48:47.184049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.323 [2024-12-16 10:48:47.184221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:47.323 [2024-12-16 10:48:47.184295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.799 ms 00:16:47.323 [2024-12-16 10:48:47.184325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.323 [2024-12-16 10:48:47.184548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.323 [2024-12-16 10:48:47.184658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:47.323 [2024-12-16 10:48:47.184798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.106 ms 00:16:47.323 [2024-12-16 10:48:47.184834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.323 [2024-12-16 10:48:47.192513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:47.323 [2024-12-16 10:48:47.192700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:47.323 [2024-12-16 10:48:47.192801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:47.323 [2024-12-16 10:48:47.192836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.323 [2024-12-16 10:48:47.193062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:47.323 [2024-12-16 10:48:47.193170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:47.323 [2024-12-16 10:48:47.193251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:47.323 [2024-12-16 10:48:47.193360] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.323 [2024-12-16 10:48:47.193466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:47.323 [2024-12-16 10:48:47.193518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:47.323 [2024-12-16 10:48:47.193582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:47.323 [2024-12-16 10:48:47.193615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.323 [2024-12-16 10:48:47.193696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:47.323 [2024-12-16 10:48:47.193763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:47.323 [2024-12-16 10:48:47.193829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:47.323 [2024-12-16 10:48:47.193872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.323 [2024-12-16 10:48:47.207964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:47.323 [2024-12-16 10:48:47.208148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:47.323 [2024-12-16 10:48:47.208220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:47.323 [2024-12-16 10:48:47.208252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.323 [2024-12-16 10:48:47.218596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:47.323 [2024-12-16 10:48:47.218792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:47.323 [2024-12-16 10:48:47.218827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:47.323 [2024-12-16 10:48:47.218840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.323 [2024-12-16 10:48:47.218901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:47.323 [2024-12-16 10:48:47.218911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:47.323 [2024-12-16 10:48:47.218923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:47.323 [2024-12-16 10:48:47.219003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.323 [2024-12-16 10:48:47.219057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:47.323 [2024-12-16 10:48:47.219068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:47.323 [2024-12-16 10:48:47.219077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:47.323 [2024-12-16 10:48:47.219085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.323 [2024-12-16 10:48:47.219171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:47.323 [2024-12-16 10:48:47.219181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:47.323 [2024-12-16 10:48:47.219194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:47.323 [2024-12-16 10:48:47.219206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.323 [2024-12-16 10:48:47.219239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:47.323 [2024-12-16 10:48:47.219249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:47.323 [2024-12-16 10:48:47.219258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 00:16:47.323 [2024-12-16 10:48:47.219265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.323 [2024-12-16 10:48:47.219307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:47.323 [2024-12-16 10:48:47.219324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:47.323 [2024-12-16 10:48:47.219334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:47.323 [2024-12-16 10:48:47.219342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.323 [2024-12-16 10:48:47.219393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:47.323 [2024-12-16 10:48:47.219405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:47.323 [2024-12-16 10:48:47.219415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:47.323 [2024-12-16 10:48:47.219424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.323 [2024-12-16 10:48:47.219576] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 79.410 ms, result 0 00:16:47.896 00:16:47.896 00:16:47.896 10:48:47 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=85283 00:16:47.896 10:48:47 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 85283 00:16:47.896 10:48:47 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:16:47.896 10:48:47 ftl.ftl_trim -- common/autotest_common.sh@831 -- # '[' -z 85283 ']' 00:16:47.896 10:48:47 ftl.ftl_trim -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:47.896 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:47.896 10:48:47 ftl.ftl_trim -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:47.896 10:48:47 ftl.ftl_trim -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:47.896 10:48:47 ftl.ftl_trim -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:47.896 10:48:47 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:16:47.896 [2024-12-16 10:48:47.771784] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:16:47.896 [2024-12-16 10:48:47.772080] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85283 ] 00:16:48.157 [2024-12-16 10:48:47.907415] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:48.157 [2024-12-16 10:48:47.949472] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:48.728 10:48:48 ftl.ftl_trim -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:48.728 10:48:48 ftl.ftl_trim -- common/autotest_common.sh@864 -- # return 0 00:16:48.728 10:48:48 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:16:48.989 [2024-12-16 10:48:48.810647] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:48.989 [2024-12-16 10:48:48.810709] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:49.252 [2024-12-16 10:48:48.983948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.252 [2024-12-16 10:48:48.984116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:49.252 [2024-12-16 10:48:48.984140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:16:49.252 [2024-12-16 10:48:48.984150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.252 [2024-12-16 10:48:48.986429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.252 [2024-12-16 10:48:48.986464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:49.252 [2024-12-16 10:48:48.986477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.257 ms 00:16:49.252 [2024-12-16 10:48:48.986487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.252 [2024-12-16 10:48:48.986553] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:49.252 [2024-12-16 10:48:48.986775] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:49.252 [2024-12-16 10:48:48.986789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.252 [2024-12-16 10:48:48.986801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:49.252 [2024-12-16 10:48:48.986819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.243 ms 00:16:49.252 [2024-12-16 10:48:48.986829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.252 [2024-12-16 10:48:48.987926] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:16:49.252 [2024-12-16 10:48:48.990864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.252 [2024-12-16 10:48:48.990906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:16:49.252 [2024-12-16 10:48:48.990920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.935 ms 00:16:49.252 [2024-12-16 10:48:48.990938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.252 [2024-12-16 10:48:48.990999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.252 [2024-12-16 10:48:48.991009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:16:49.252 [2024-12-16 10:48:48.991022] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:16:49.252 [2024-12-16 10:48:48.991030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.252 [2024-12-16 10:48:48.995700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.252 [2024-12-16 10:48:48.995733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:49.252 [2024-12-16 10:48:48.995745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.607 ms 00:16:49.252 [2024-12-16 10:48:48.995753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.252 [2024-12-16 10:48:48.995853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.252 [2024-12-16 10:48:48.995865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:49.252 [2024-12-16 10:48:48.995880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:16:49.252 [2024-12-16 10:48:48.995888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.252 [2024-12-16 10:48:48.995916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.252 [2024-12-16 10:48:48.995924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:49.252 [2024-12-16 10:48:48.995948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:16:49.252 [2024-12-16 10:48:48.995958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.252 [2024-12-16 10:48:48.995983] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:16:49.252 [2024-12-16 10:48:48.997272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.252 [2024-12-16 10:48:48.997301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:49.252 [2024-12-16 10:48:48.997314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.296 ms 00:16:49.252 [2024-12-16 10:48:48.997324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.252 [2024-12-16 10:48:48.997362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.252 [2024-12-16 10:48:48.997372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:49.252 [2024-12-16 10:48:48.997381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:16:49.252 [2024-12-16 10:48:48.997393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.252 [2024-12-16 10:48:48.997413] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:16:49.252 [2024-12-16 10:48:48.997431] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:16:49.252 [2024-12-16 10:48:48.997466] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:16:49.252 [2024-12-16 10:48:48.997484] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:16:49.252 [2024-12-16 10:48:48.997586] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:16:49.252 [2024-12-16 10:48:48.997599] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:49.252 [2024-12-16 10:48:48.997610] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:16:49.252 [2024-12-16 10:48:48.997623] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:49.252 [2024-12-16 10:48:48.997633] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:49.252 [2024-12-16 10:48:48.997644] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:16:49.252 [2024-12-16 10:48:48.997652] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:49.252 [2024-12-16 10:48:48.997662] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:16:49.252 [2024-12-16 10:48:48.997670] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:16:49.252 [2024-12-16 10:48:48.997679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.252 [2024-12-16 10:48:48.997688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:49.252 [2024-12-16 10:48:48.997698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.266 ms 00:16:49.252 [2024-12-16 10:48:48.997709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.252 [2024-12-16 10:48:48.997798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.252 [2024-12-16 10:48:48.997806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:49.252 [2024-12-16 10:48:48.997817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:16:49.252 [2024-12-16 10:48:48.997824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.252 [2024-12-16 10:48:48.997926] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:49.252 [2024-12-16 10:48:48.997953] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:49.252 [2024-12-16 10:48:48.997967] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:49.252 [2024-12-16 10:48:48.997979] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:49.252 [2024-12-16 10:48:48.997992] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:49.252 [2024-12-16 10:48:48.998001] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:49.252 [2024-12-16 10:48:48.998010] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:16:49.252 [2024-12-16 10:48:48.998020] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:49.252 [2024-12-16 10:48:48.998030] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:16:49.252 [2024-12-16 10:48:48.998038] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:49.252 [2024-12-16 10:48:48.998047] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:49.252 [2024-12-16 10:48:48.998056] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:16:49.252 [2024-12-16 10:48:48.998084] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:49.252 [2024-12-16 10:48:48.998092] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:49.252 [2024-12-16 10:48:48.998101] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:16:49.252 [2024-12-16 10:48:48.998110] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:49.252 
[2024-12-16 10:48:48.998120] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:49.252 [2024-12-16 10:48:48.998128] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:16:49.252 [2024-12-16 10:48:48.998137] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:49.252 [2024-12-16 10:48:48.998146] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:49.252 [2024-12-16 10:48:48.998157] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:16:49.252 [2024-12-16 10:48:48.998165] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:49.252 [2024-12-16 10:48:48.998175] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:49.252 [2024-12-16 10:48:48.998183] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:16:49.252 [2024-12-16 10:48:48.998194] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:49.252 [2024-12-16 10:48:48.998202] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:49.252 [2024-12-16 10:48:48.998212] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:16:49.252 [2024-12-16 10:48:48.998221] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:49.252 [2024-12-16 10:48:48.998230] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:49.252 [2024-12-16 10:48:48.998246] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:16:49.252 [2024-12-16 10:48:48.998255] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:49.252 [2024-12-16 10:48:48.998263] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:49.252 [2024-12-16 10:48:48.998275] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:16:49.252 [2024-12-16 10:48:48.998283] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:49.252 [2024-12-16 10:48:48.998292] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:49.252 [2024-12-16 10:48:48.998300] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:16:49.252 [2024-12-16 10:48:48.998311] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:49.252 [2024-12-16 10:48:48.998320] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:16:49.252 [2024-12-16 10:48:48.998329] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:16:49.252 [2024-12-16 10:48:48.998337] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:49.252 [2024-12-16 10:48:48.998346] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:16:49.252 [2024-12-16 10:48:48.998353] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:16:49.252 [2024-12-16 10:48:48.998363] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:49.252 [2024-12-16 10:48:48.998370] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:49.252 [2024-12-16 10:48:48.998379] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:49.252 [2024-12-16 10:48:48.998387] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:49.252 [2024-12-16 10:48:48.998397] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:49.252 [2024-12-16 10:48:48.998405] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:16:49.252 [2024-12-16 10:48:48.998413] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:49.252 [2024-12-16 10:48:48.998421] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:49.252 [2024-12-16 10:48:48.998430] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:49.252 [2024-12-16 10:48:48.998437] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:49.252 [2024-12-16 10:48:48.998448] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:49.252 [2024-12-16 10:48:48.998456] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:49.252 [2024-12-16 10:48:48.998467] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:49.252 [2024-12-16 10:48:48.998477] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:16:49.252 [2024-12-16 10:48:48.998487] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:16:49.252 [2024-12-16 10:48:48.998495] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:16:49.253 [2024-12-16 10:48:48.998506] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:16:49.253 [2024-12-16 10:48:48.998514] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:16:49.253 [2024-12-16 10:48:48.998523] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:16:49.253 [2024-12-16 10:48:48.998531] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:16:49.253 [2024-12-16 10:48:48.998540] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:16:49.253 [2024-12-16 10:48:48.998548] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:16:49.253 [2024-12-16 10:48:48.998557] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:16:49.253 [2024-12-16 10:48:48.998565] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:16:49.253 [2024-12-16 10:48:48.998574] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:16:49.253 [2024-12-16 10:48:48.998582] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:16:49.253 [2024-12-16 10:48:48.998593] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:16:49.253 [2024-12-16 10:48:48.998606] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:49.253 [2024-12-16 
10:48:48.998616] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:49.253 [2024-12-16 10:48:48.998625] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:49.253 [2024-12-16 10:48:48.998634] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:49.253 [2024-12-16 10:48:48.998642] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:49.253 [2024-12-16 10:48:48.998652] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:49.253 [2024-12-16 10:48:48.998660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.253 [2024-12-16 10:48:48.998671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:49.253 [2024-12-16 10:48:48.998680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.805 ms 00:16:49.253 [2024-12-16 10:48:48.998689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.253 [2024-12-16 10:48:49.007164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.253 [2024-12-16 10:48:49.007197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:49.253 [2024-12-16 10:48:49.007207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.404 ms 00:16:49.253 [2024-12-16 10:48:49.007217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.253 [2024-12-16 10:48:49.007311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.253 [2024-12-16 10:48:49.007325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:49.253 [2024-12-16 10:48:49.007336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:16:49.253 [2024-12-16 10:48:49.007345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.253 [2024-12-16 10:48:49.015189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.253 [2024-12-16 10:48:49.015220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:49.253 [2024-12-16 10:48:49.015233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.824 ms 00:16:49.253 [2024-12-16 10:48:49.015249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.253 [2024-12-16 10:48:49.015299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.253 [2024-12-16 10:48:49.015313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:49.253 [2024-12-16 10:48:49.015328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:49.253 [2024-12-16 10:48:49.015337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.253 [2024-12-16 10:48:49.015635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.253 [2024-12-16 10:48:49.015655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:49.253 [2024-12-16 10:48:49.015664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.277 ms 00:16:49.253 [2024-12-16 10:48:49.015674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:16:49.253 [2024-12-16 10:48:49.015796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.253 [2024-12-16 10:48:49.015809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:49.253 [2024-12-16 10:48:49.015820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:16:49.253 [2024-12-16 10:48:49.015831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.253 [2024-12-16 10:48:49.030974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.253 [2024-12-16 10:48:49.031024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:49.253 [2024-12-16 10:48:49.031043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.119 ms 00:16:49.253 [2024-12-16 10:48:49.031058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.253 [2024-12-16 10:48:49.033732] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:16:49.253 [2024-12-16 10:48:49.033778] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:16:49.253 [2024-12-16 10:48:49.033796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.253 [2024-12-16 10:48:49.033811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:16:49.253 [2024-12-16 10:48:49.033823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.567 ms 00:16:49.253 [2024-12-16 10:48:49.033837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.253 [2024-12-16 10:48:49.048565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.253 [2024-12-16 10:48:49.048636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:16:49.253 [2024-12-16 10:48:49.048657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.671 ms 00:16:49.253 [2024-12-16 10:48:49.048671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.253 [2024-12-16 10:48:49.050609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.253 [2024-12-16 10:48:49.050640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:16:49.253 [2024-12-16 10:48:49.050650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.867 ms 00:16:49.253 [2024-12-16 10:48:49.050660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.253 [2024-12-16 10:48:49.052492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.253 [2024-12-16 10:48:49.052521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:16:49.253 [2024-12-16 10:48:49.052530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.797 ms 00:16:49.253 [2024-12-16 10:48:49.052540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.253 [2024-12-16 10:48:49.052879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.253 [2024-12-16 10:48:49.052896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:49.253 [2024-12-16 10:48:49.052908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.277 ms 00:16:49.253 [2024-12-16 10:48:49.052921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.253 [2024-12-16 10:48:49.068214] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.253 [2024-12-16 10:48:49.068255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:16:49.253 [2024-12-16 10:48:49.068267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.248 ms 00:16:49.253 [2024-12-16 10:48:49.068280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.253 [2024-12-16 10:48:49.075631] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:16:49.253 [2024-12-16 10:48:49.089094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.253 [2024-12-16 10:48:49.089124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:49.253 [2024-12-16 10:48:49.089139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.757 ms 00:16:49.253 [2024-12-16 10:48:49.089153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.253 [2024-12-16 10:48:49.089248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.253 [2024-12-16 10:48:49.089268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:16:49.253 [2024-12-16 10:48:49.089280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:16:49.253 [2024-12-16 10:48:49.089290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.253 [2024-12-16 10:48:49.089340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.253 [2024-12-16 10:48:49.089349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:49.253 [2024-12-16 10:48:49.089363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:16:49.253 [2024-12-16 10:48:49.089371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.253 [2024-12-16 10:48:49.089401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.253 [2024-12-16 10:48:49.089410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:49.253 [2024-12-16 10:48:49.089423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:16:49.253 [2024-12-16 10:48:49.089431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.253 [2024-12-16 10:48:49.089464] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:16:49.253 [2024-12-16 10:48:49.089477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.253 [2024-12-16 10:48:49.089486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:16:49.253 [2024-12-16 10:48:49.089495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:16:49.253 [2024-12-16 10:48:49.089504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.253 [2024-12-16 10:48:49.093288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.253 [2024-12-16 10:48:49.093397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:49.253 [2024-12-16 10:48:49.093412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.763 ms 00:16:49.253 [2024-12-16 10:48:49.093422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.253 [2024-12-16 10:48:49.093506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.253 [2024-12-16 10:48:49.093519] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:49.253 [2024-12-16 10:48:49.093528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:16:49.253 [2024-12-16 10:48:49.093538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.253 [2024-12-16 10:48:49.094309] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:49.253 [2024-12-16 10:48:49.095264] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 110.097 ms, result 0 00:16:49.253 [2024-12-16 10:48:49.096672] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:49.254 Some configs were skipped because the RPC state that can call them passed over. 00:16:49.254 10:48:49 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:16:49.514 [2024-12-16 10:48:49.320817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.515 [2024-12-16 10:48:49.320956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:16:49.515 [2024-12-16 10:48:49.320977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.911 ms 00:16:49.515 [2024-12-16 10:48:49.320986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.515 [2024-12-16 10:48:49.321023] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 3.125 ms, result 0 00:16:49.515 true 00:16:49.515 10:48:49 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:16:49.777 [2024-12-16 10:48:49.520753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.777 [2024-12-16 10:48:49.520802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:16:49.777 [2024-12-16 10:48:49.520814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.630 ms 00:16:49.777 [2024-12-16 10:48:49.520825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.777 [2024-12-16 10:48:49.520859] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.736 ms, result 0 00:16:49.777 true 00:16:49.777 10:48:49 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 85283 00:16:49.777 10:48:49 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 85283 ']' 00:16:49.777 10:48:49 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 85283 00:16:49.777 10:48:49 ftl.ftl_trim -- common/autotest_common.sh@955 -- # uname 00:16:49.777 10:48:49 ftl.ftl_trim -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:49.777 10:48:49 ftl.ftl_trim -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85283 00:16:49.777 killing process with pid 85283 00:16:49.777 10:48:49 ftl.ftl_trim -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:49.777 10:48:49 ftl.ftl_trim -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:49.777 10:48:49 ftl.ftl_trim -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85283' 00:16:49.777 10:48:49 ftl.ftl_trim -- common/autotest_common.sh@969 -- # kill 85283 00:16:49.777 10:48:49 ftl.ftl_trim -- common/autotest_common.sh@974 -- # wait 85283 00:16:49.777 [2024-12-16 10:48:49.655270] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.777 [2024-12-16 10:48:49.655325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:49.777 [2024-12-16 10:48:49.655343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:49.777 [2024-12-16 10:48:49.655352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.778 [2024-12-16 10:48:49.655377] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:16:49.778 [2024-12-16 10:48:49.655798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.778 [2024-12-16 10:48:49.655815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:49.778 [2024-12-16 10:48:49.655825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.408 ms 00:16:49.778 [2024-12-16 10:48:49.655834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.778 [2024-12-16 10:48:49.656145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.778 [2024-12-16 10:48:49.656163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:49.778 [2024-12-16 10:48:49.656173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.284 ms 00:16:49.778 [2024-12-16 10:48:49.656186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.778 [2024-12-16 10:48:49.660600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.778 [2024-12-16 10:48:49.660634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:16:49.778 [2024-12-16 10:48:49.660645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.396 ms 00:16:49.778 [2024-12-16 10:48:49.660654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.778 [2024-12-16 10:48:49.667614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.778 [2024-12-16 10:48:49.667647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:16:49.778 [2024-12-16 10:48:49.667657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.927 ms 00:16:49.778 [2024-12-16 10:48:49.667669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.778 [2024-12-16 10:48:49.670214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.778 [2024-12-16 10:48:49.670331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:49.778 [2024-12-16 10:48:49.670346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.494 ms 00:16:49.778 [2024-12-16 10:48:49.670356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.778 [2024-12-16 10:48:49.674325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.778 [2024-12-16 10:48:49.674451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:49.778 [2024-12-16 10:48:49.674490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.914 ms 00:16:49.778 [2024-12-16 10:48:49.674521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.778 [2024-12-16 10:48:49.674879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.778 [2024-12-16 10:48:49.674925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:49.778 [2024-12-16 10:48:49.674981] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.281 ms 00:16:49.778 [2024-12-16 10:48:49.675019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.778 [2024-12-16 10:48:49.678470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.778 [2024-12-16 10:48:49.678564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:16:49.778 [2024-12-16 10:48:49.678594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.396 ms 00:16:49.778 [2024-12-16 10:48:49.678628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.778 [2024-12-16 10:48:49.681670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.778 [2024-12-16 10:48:49.682000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:16:49.778 [2024-12-16 10:48:49.682045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.954 ms 00:16:49.778 [2024-12-16 10:48:49.682074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.778 [2024-12-16 10:48:49.684514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.778 [2024-12-16 10:48:49.684608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:49.778 [2024-12-16 10:48:49.684640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.269 ms 00:16:49.778 [2024-12-16 10:48:49.684672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.778 [2024-12-16 10:48:49.686550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.778 [2024-12-16 10:48:49.686582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:49.778 [2024-12-16 10:48:49.686592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.634 ms 00:16:49.778 [2024-12-16 10:48:49.686601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.778 [2024-12-16 10:48:49.686632] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:49.778 [2024-12-16 10:48:49.686648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:49.778 [2024-12-16 10:48:49.686659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:49.778 [2024-12-16 10:48:49.686671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:49.778 [2024-12-16 10:48:49.686680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:49.778 [2024-12-16 10:48:49.686689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:49.778 [2024-12-16 10:48:49.686698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:49.778 [2024-12-16 10:48:49.686708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:49.778 [2024-12-16 10:48:49.686716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:49.778 [2024-12-16 10:48:49.686728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:49.778 [2024-12-16 10:48:49.686736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:49.778 [2024-12-16 10:48:49.686746] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:16:49.778 [2024-12-16 10:48:49.686754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:16:49.778 [2024-12-16 10:48:49.686764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:16:49.778 [2024-12-16 10:48:49.686772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:16:49.778 [2024-12-16 10:48:49.686782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:16:49.778 [2024-12-16 10:48:49.686790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:16:49.778 [2024-12-16 10:48:49.686800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:16:49.778 [2024-12-16 10:48:49.686808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:16:49.778 [2024-12-16 10:48:49.686820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:16:49.778 [2024-12-16 10:48:49.686828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:16:49.778 [2024-12-16 10:48:49.686844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:16:49.778 [2024-12-16 10:48:49.686852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:16:49.778 [2024-12-16 10:48:49.686862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:16:49.778 [2024-12-16 10:48:49.686870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:16:49.778 [2024-12-16 10:48:49.686882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:16:49.778 [2024-12-16 10:48:49.686891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:16:49.778 [2024-12-16 10:48:49.686901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:16:49.778 [2024-12-16 10:48:49.686910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:16:49.778 [2024-12-16 10:48:49.686919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:16:49.778 [2024-12-16 10:48:49.686950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:16:49.778 [2024-12-16 10:48:49.686961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:16:49.778 [2024-12-16 10:48:49.686970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:16:49.778 [2024-12-16 10:48:49.686980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:16:49.778 [2024-12-16 10:48:49.686989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:16:49.778 [2024-12-16 10:48:49.687002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:16:49.778 [2024-12-16 
10:48:49.687011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:16:49.778 [2024-12-16 10:48:49.687021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:16:49.778 [2024-12-16 10:48:49.687029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:16:49.778 [2024-12-16 10:48:49.687038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:16:49.778 [2024-12-16 10:48:49.687047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:16:49.778 [2024-12-16 10:48:49.687057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:16:49.778 [2024-12-16 10:48:49.687065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:16:49.778 [2024-12-16 10:48:49.687074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:16:49.778 [2024-12-16 10:48:49.687082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:16:49.778 [2024-12-16 10:48:49.687092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:16:49.778 [2024-12-16 10:48:49.687101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:16:49.778 [2024-12-16 10:48:49.687111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:16:49.778 [2024-12-16 10:48:49.687119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:16:49.778 [2024-12-16 10:48:49.687148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:16:49.778 [2024-12-16 10:48:49.687156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:16:49.778 [2024-12-16 10:48:49.687167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:16:49.778 [2024-12-16 10:48:49.687175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:16:49.778 [2024-12-16 10:48:49.687185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:16:49.779 [2024-12-16 10:48:49.687193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:16:49.779 [2024-12-16 10:48:49.687203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:16:49.779 [2024-12-16 10:48:49.687211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:16:49.779 [2024-12-16 10:48:49.687221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:16:49.779 [2024-12-16 10:48:49.687229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:16:49.779 [2024-12-16 10:48:49.687238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:16:49.779 [2024-12-16 10:48:49.687246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 
00:16:49.779 [2024-12-16 10:48:49.687257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:16:49.779 [2024-12-16 10:48:49.687266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:16:49.779 [2024-12-16 10:48:49.687275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:16:49.779 [2024-12-16 10:48:49.687283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:16:49.779 [2024-12-16 10:48:49.687294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:16:49.779 [2024-12-16 10:48:49.687303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:16:49.779 [2024-12-16 10:48:49.687314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:16:49.779 [2024-12-16 10:48:49.687323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:16:49.779 [2024-12-16 10:48:49.687333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:16:49.779 [2024-12-16 10:48:49.687341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:16:49.779 [2024-12-16 10:48:49.687351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:16:49.779 [2024-12-16 10:48:49.687359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:16:49.779 [2024-12-16 10:48:49.687369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:16:49.779 [2024-12-16 10:48:49.687377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:16:49.779 [2024-12-16 10:48:49.687386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:16:49.779 [2024-12-16 10:48:49.687394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:16:49.779 [2024-12-16 10:48:49.687404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:16:49.779 [2024-12-16 10:48:49.687412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:16:49.779 [2024-12-16 10:48:49.687422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:16:49.779 [2024-12-16 10:48:49.687430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:16:49.779 [2024-12-16 10:48:49.687439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:16:49.779 [2024-12-16 10:48:49.687448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:16:49.779 [2024-12-16 10:48:49.687459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:16:49.779 [2024-12-16 10:48:49.687467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:16:49.779 [2024-12-16 10:48:49.687477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 
wr_cnt: 0 state: free 00:16:49.779 [2024-12-16 10:48:49.687485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:16:49.779 [2024-12-16 10:48:49.687497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:16:49.779 [2024-12-16 10:48:49.687505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:16:49.779 [2024-12-16 10:48:49.687516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:16:49.779 [2024-12-16 10:48:49.687524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:16:49.779 [2024-12-16 10:48:49.687534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:16:49.779 [2024-12-16 10:48:49.687542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:16:49.779 [2024-12-16 10:48:49.687551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:16:49.779 [2024-12-16 10:48:49.687560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:16:49.779 [2024-12-16 10:48:49.687570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:16:49.779 [2024-12-16 10:48:49.687578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:16:49.779 [2024-12-16 10:48:49.687592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:16:49.779 [2024-12-16 10:48:49.687601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:16:49.779 [2024-12-16 10:48:49.687612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:16:49.779 [2024-12-16 10:48:49.687621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:16:49.779 [2024-12-16 10:48:49.687638] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:49.779 [2024-12-16 10:48:49.687651] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 37bf0612-866e-4ad5-8f9f-97f4cdf0ff4f 00:16:49.779 [2024-12-16 10:48:49.687661] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:49.779 [2024-12-16 10:48:49.687669] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:49.779 [2024-12-16 10:48:49.687678] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:49.779 [2024-12-16 10:48:49.687688] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:49.779 [2024-12-16 10:48:49.687697] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:49.779 [2024-12-16 10:48:49.687705] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:49.779 [2024-12-16 10:48:49.687714] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:49.779 [2024-12-16 10:48:49.687722] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:49.779 [2024-12-16 10:48:49.687732] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:49.779 [2024-12-16 10:48:49.687740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.779 
[2024-12-16 10:48:49.687752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:49.779 [2024-12-16 10:48:49.687761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.109 ms 00:16:49.779 [2024-12-16 10:48:49.687772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.779 [2024-12-16 10:48:49.689120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.779 [2024-12-16 10:48:49.689224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:49.779 [2024-12-16 10:48:49.689239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.330 ms 00:16:49.779 [2024-12-16 10:48:49.689248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.779 [2024-12-16 10:48:49.689328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.779 [2024-12-16 10:48:49.689340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:49.779 [2024-12-16 10:48:49.689348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:16:49.779 [2024-12-16 10:48:49.689358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.779 [2024-12-16 10:48:49.694359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:49.779 [2024-12-16 10:48:49.694397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:49.779 [2024-12-16 10:48:49.694407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:49.779 [2024-12-16 10:48:49.694416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.779 [2024-12-16 10:48:49.694492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:49.779 [2024-12-16 10:48:49.694504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:49.779 [2024-12-16 10:48:49.694512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:49.779 [2024-12-16 10:48:49.694524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.779 [2024-12-16 10:48:49.694566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:49.779 [2024-12-16 10:48:49.694578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:49.779 [2024-12-16 10:48:49.694587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:49.779 [2024-12-16 10:48:49.694597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.779 [2024-12-16 10:48:49.694616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:49.779 [2024-12-16 10:48:49.694625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:49.779 [2024-12-16 10:48:49.694634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:49.779 [2024-12-16 10:48:49.694643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.779 [2024-12-16 10:48:49.703176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:49.779 [2024-12-16 10:48:49.703220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:49.779 [2024-12-16 10:48:49.703232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:49.779 [2024-12-16 10:48:49.703241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.779 [2024-12-16 10:48:49.709975] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:49.779 [2024-12-16 10:48:49.710016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:49.779 [2024-12-16 10:48:49.710028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:49.779 [2024-12-16 10:48:49.710041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.779 [2024-12-16 10:48:49.710097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:49.779 [2024-12-16 10:48:49.710115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:49.779 [2024-12-16 10:48:49.710125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:49.779 [2024-12-16 10:48:49.710138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.779 [2024-12-16 10:48:49.710170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:49.779 [2024-12-16 10:48:49.710181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:49.779 [2024-12-16 10:48:49.710190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:49.779 [2024-12-16 10:48:49.710201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.779 [2024-12-16 10:48:49.710269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:49.780 [2024-12-16 10:48:49.710282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:49.780 [2024-12-16 10:48:49.710291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:49.780 [2024-12-16 10:48:49.710302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.780 [2024-12-16 10:48:49.710334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:49.780 [2024-12-16 10:48:49.710350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:49.780 [2024-12-16 10:48:49.710359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:49.780 [2024-12-16 10:48:49.710371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.780 [2024-12-16 10:48:49.710411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:49.780 [2024-12-16 10:48:49.710422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:49.780 [2024-12-16 10:48:49.710432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:49.780 [2024-12-16 10:48:49.710442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.780 [2024-12-16 10:48:49.710489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:49.780 [2024-12-16 10:48:49.710502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:49.780 [2024-12-16 10:48:49.710512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:49.780 [2024-12-16 10:48:49.710522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.780 [2024-12-16 10:48:49.710651] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 55.357 ms, result 0 00:16:50.041 10:48:49 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:16:50.041 10:48:49 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:50.041 [2024-12-16 10:48:49.960908] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:16:50.041 [2024-12-16 10:48:49.961031] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85319 ] 00:16:50.303 [2024-12-16 10:48:50.094525] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:50.303 [2024-12-16 10:48:50.126799] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:50.303 [2024-12-16 10:48:50.210591] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:50.303 [2024-12-16 10:48:50.210649] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:50.566 [2024-12-16 10:48:50.367483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:50.566 [2024-12-16 10:48:50.367526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:50.566 [2024-12-16 10:48:50.367539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:50.566 [2024-12-16 10:48:50.367546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:50.566 [2024-12-16 10:48:50.369783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:50.566 [2024-12-16 10:48:50.369916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:50.566 [2024-12-16 10:48:50.369944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.221 ms 00:16:50.566 [2024-12-16 10:48:50.369952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:50.566 [2024-12-16 10:48:50.370018] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:50.566 [2024-12-16 10:48:50.370241] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:50.566 [2024-12-16 10:48:50.370260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:50.566 [2024-12-16 10:48:50.370267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:50.566 [2024-12-16 10:48:50.370278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.249 ms 00:16:50.566 [2024-12-16 10:48:50.370285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:50.566 [2024-12-16 10:48:50.371323] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:16:50.566 [2024-12-16 10:48:50.373423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:50.566 [2024-12-16 10:48:50.373454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:16:50.566 [2024-12-16 10:48:50.373467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.103 ms 00:16:50.566 [2024-12-16 10:48:50.373476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:50.566 [2024-12-16 10:48:50.373532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:50.566 [2024-12-16 10:48:50.373542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:16:50.566 [2024-12-16 10:48:50.373550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.016 ms 00:16:50.566 [2024-12-16 10:48:50.373556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:50.566 [2024-12-16 10:48:50.378151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:50.566 [2024-12-16 10:48:50.378177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:50.566 [2024-12-16 10:48:50.378186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.558 ms 00:16:50.566 [2024-12-16 10:48:50.378193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:50.566 [2024-12-16 10:48:50.378286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:50.566 [2024-12-16 10:48:50.378298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:50.566 [2024-12-16 10:48:50.378306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:16:50.566 [2024-12-16 10:48:50.378313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:50.566 [2024-12-16 10:48:50.378338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:50.567 [2024-12-16 10:48:50.378345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:50.567 [2024-12-16 10:48:50.378360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:16:50.567 [2024-12-16 10:48:50.378367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:50.567 [2024-12-16 10:48:50.378385] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:16:50.567 [2024-12-16 10:48:50.379659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:50.567 [2024-12-16 10:48:50.379685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:50.567 [2024-12-16 10:48:50.379694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.278 ms 00:16:50.567 [2024-12-16 10:48:50.379701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:50.567 [2024-12-16 10:48:50.379733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:50.567 [2024-12-16 10:48:50.379745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:50.567 [2024-12-16 10:48:50.379752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:16:50.567 [2024-12-16 10:48:50.379761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:50.567 [2024-12-16 10:48:50.379777] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:16:50.567 [2024-12-16 10:48:50.379796] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:16:50.567 [2024-12-16 10:48:50.379832] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:16:50.567 [2024-12-16 10:48:50.379849] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:16:50.567 [2024-12-16 10:48:50.379968] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:16:50.567 [2024-12-16 10:48:50.379979] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:50.567 [2024-12-16 10:48:50.379989] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: 
*NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:16:50.567 [2024-12-16 10:48:50.380002] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:50.567 [2024-12-16 10:48:50.380010] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:50.567 [2024-12-16 10:48:50.380021] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:16:50.567 [2024-12-16 10:48:50.380027] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:50.567 [2024-12-16 10:48:50.380034] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:16:50.567 [2024-12-16 10:48:50.380041] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:16:50.567 [2024-12-16 10:48:50.380048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:50.567 [2024-12-16 10:48:50.380055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:50.567 [2024-12-16 10:48:50.380066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.273 ms 00:16:50.567 [2024-12-16 10:48:50.380072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:50.567 [2024-12-16 10:48:50.380158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:50.567 [2024-12-16 10:48:50.380166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:50.567 [2024-12-16 10:48:50.380174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:16:50.567 [2024-12-16 10:48:50.380180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:50.567 [2024-12-16 10:48:50.380276] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:50.567 [2024-12-16 10:48:50.380288] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:50.567 [2024-12-16 10:48:50.380297] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:50.567 [2024-12-16 10:48:50.380307] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:50.567 [2024-12-16 10:48:50.380315] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:50.567 [2024-12-16 10:48:50.380323] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:50.567 [2024-12-16 10:48:50.380330] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:16:50.567 [2024-12-16 10:48:50.380339] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:50.567 [2024-12-16 10:48:50.380347] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:16:50.567 [2024-12-16 10:48:50.380358] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:50.567 [2024-12-16 10:48:50.380367] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:50.567 [2024-12-16 10:48:50.380375] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:16:50.567 [2024-12-16 10:48:50.380383] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:50.567 [2024-12-16 10:48:50.380390] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:50.567 [2024-12-16 10:48:50.380398] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:16:50.567 [2024-12-16 10:48:50.380405] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:50.567 [2024-12-16 10:48:50.380413] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:50.567 [2024-12-16 10:48:50.380421] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:16:50.567 [2024-12-16 10:48:50.380428] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:50.567 [2024-12-16 10:48:50.380436] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:50.567 [2024-12-16 10:48:50.380443] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:16:50.567 [2024-12-16 10:48:50.380450] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:50.567 [2024-12-16 10:48:50.380458] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:50.567 [2024-12-16 10:48:50.380465] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:16:50.567 [2024-12-16 10:48:50.380472] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:50.567 [2024-12-16 10:48:50.380483] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:50.567 [2024-12-16 10:48:50.380491] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:16:50.567 [2024-12-16 10:48:50.380498] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:50.567 [2024-12-16 10:48:50.380505] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:50.567 [2024-12-16 10:48:50.380513] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:16:50.567 [2024-12-16 10:48:50.380520] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:50.567 [2024-12-16 10:48:50.380527] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:50.567 [2024-12-16 10:48:50.380534] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:16:50.567 [2024-12-16 10:48:50.380541] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:50.567 [2024-12-16 10:48:50.380548] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:50.567 [2024-12-16 10:48:50.380556] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:16:50.567 [2024-12-16 10:48:50.380563] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:50.567 [2024-12-16 10:48:50.380570] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:16:50.567 [2024-12-16 10:48:50.380577] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:16:50.567 [2024-12-16 10:48:50.380585] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:50.567 [2024-12-16 10:48:50.380592] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:16:50.567 [2024-12-16 10:48:50.380601] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:16:50.567 [2024-12-16 10:48:50.380609] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:50.567 [2024-12-16 10:48:50.380617] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:50.567 [2024-12-16 10:48:50.380625] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:50.567 [2024-12-16 10:48:50.380633] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:50.567 [2024-12-16 10:48:50.380641] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:50.567 [2024-12-16 10:48:50.380649] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:50.567 
[2024-12-16 10:48:50.380657] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:50.567 [2024-12-16 10:48:50.380664] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:50.567 [2024-12-16 10:48:50.380672] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:50.567 [2024-12-16 10:48:50.380679] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:50.567 [2024-12-16 10:48:50.380687] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:50.567 [2024-12-16 10:48:50.380695] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:50.567 [2024-12-16 10:48:50.380705] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:50.567 [2024-12-16 10:48:50.380717] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:16:50.567 [2024-12-16 10:48:50.380725] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:16:50.567 [2024-12-16 10:48:50.380735] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:16:50.567 [2024-12-16 10:48:50.380743] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:16:50.567 [2024-12-16 10:48:50.380751] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:16:50.567 [2024-12-16 10:48:50.380759] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:16:50.567 [2024-12-16 10:48:50.380767] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:16:50.567 [2024-12-16 10:48:50.380798] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:16:50.567 [2024-12-16 10:48:50.380805] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:16:50.567 [2024-12-16 10:48:50.380812] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:16:50.567 [2024-12-16 10:48:50.380820] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:16:50.567 [2024-12-16 10:48:50.380826] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:16:50.567 [2024-12-16 10:48:50.380833] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:16:50.567 [2024-12-16 10:48:50.380840] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:16:50.568 [2024-12-16 10:48:50.380847] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:50.568 [2024-12-16 10:48:50.380855] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:50.568 [2024-12-16 10:48:50.380864] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:50.568 [2024-12-16 10:48:50.380872] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:50.568 [2024-12-16 10:48:50.380881] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:50.568 [2024-12-16 10:48:50.380889] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:50.568 [2024-12-16 10:48:50.380897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:50.568 [2024-12-16 10:48:50.380907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:50.568 [2024-12-16 10:48:50.380917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.690 ms 00:16:50.568 [2024-12-16 10:48:50.380924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:50.568 [2024-12-16 10:48:50.397851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:50.568 [2024-12-16 10:48:50.397998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:50.568 [2024-12-16 10:48:50.398054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.624 ms 00:16:50.568 [2024-12-16 10:48:50.398079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:50.568 [2024-12-16 10:48:50.398223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:50.568 [2024-12-16 10:48:50.398251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:50.568 [2024-12-16 10:48:50.398279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:16:50.568 [2024-12-16 10:48:50.398338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:50.568 [2024-12-16 10:48:50.406056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:50.568 [2024-12-16 10:48:50.406151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:50.568 [2024-12-16 10:48:50.406196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.680 ms 00:16:50.568 [2024-12-16 10:48:50.406224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:50.568 [2024-12-16 10:48:50.406278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:50.568 [2024-12-16 10:48:50.406306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:50.568 [2024-12-16 10:48:50.406331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:50.568 [2024-12-16 10:48:50.406350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:50.568 [2024-12-16 10:48:50.406664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:50.568 [2024-12-16 10:48:50.406711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:50.568 [2024-12-16 10:48:50.406739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.282 ms 00:16:50.568 [2024-12-16 10:48:50.406761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:50.568 [2024-12-16 
10:48:50.406893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:50.568 [2024-12-16 10:48:50.406921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:50.568 [2024-12-16 10:48:50.406998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.099 ms 00:16:50.568 [2024-12-16 10:48:50.407025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:50.568 [2024-12-16 10:48:50.411643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:50.568 [2024-12-16 10:48:50.411733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:50.568 [2024-12-16 10:48:50.411776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.551 ms 00:16:50.568 [2024-12-16 10:48:50.411798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:50.568 [2024-12-16 10:48:50.414489] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:16:50.568 [2024-12-16 10:48:50.414599] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:16:50.568 [2024-12-16 10:48:50.414654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:50.568 [2024-12-16 10:48:50.414674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:16:50.568 [2024-12-16 10:48:50.414693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.751 ms 00:16:50.568 [2024-12-16 10:48:50.414711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:50.568 [2024-12-16 10:48:50.429262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:50.568 [2024-12-16 10:48:50.429370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:16:50.568 [2024-12-16 10:48:50.429415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.482 ms 00:16:50.568 [2024-12-16 10:48:50.429437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:50.568 [2024-12-16 10:48:50.431819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:50.568 [2024-12-16 10:48:50.431945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:16:50.568 [2024-12-16 10:48:50.431999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.001 ms 00:16:50.568 [2024-12-16 10:48:50.432022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:50.568 [2024-12-16 10:48:50.434007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:50.568 [2024-12-16 10:48:50.434109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:16:50.568 [2024-12-16 10:48:50.434157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.728 ms 00:16:50.568 [2024-12-16 10:48:50.434186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:50.568 [2024-12-16 10:48:50.434516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:50.568 [2024-12-16 10:48:50.434553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:50.568 [2024-12-16 10:48:50.434685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.246 ms 00:16:50.568 [2024-12-16 10:48:50.434719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:50.568 [2024-12-16 10:48:50.449424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:16:50.568 [2024-12-16 10:48:50.449554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:16:50.568 [2024-12-16 10:48:50.449604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.667 ms 00:16:50.568 [2024-12-16 10:48:50.449627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:50.568 [2024-12-16 10:48:50.457056] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:16:50.568 [2024-12-16 10:48:50.470839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:50.568 [2024-12-16 10:48:50.470962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:50.568 [2024-12-16 10:48:50.471011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.151 ms 00:16:50.568 [2024-12-16 10:48:50.471033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:50.568 [2024-12-16 10:48:50.471135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:50.568 [2024-12-16 10:48:50.471161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:16:50.568 [2024-12-16 10:48:50.471182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:16:50.568 [2024-12-16 10:48:50.471211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:50.568 [2024-12-16 10:48:50.471272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:50.568 [2024-12-16 10:48:50.471381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:50.568 [2024-12-16 10:48:50.471401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:16:50.568 [2024-12-16 10:48:50.471419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:50.568 [2024-12-16 10:48:50.471454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:50.568 [2024-12-16 10:48:50.471475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:50.568 [2024-12-16 10:48:50.471566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:50.568 [2024-12-16 10:48:50.471576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:50.568 [2024-12-16 10:48:50.471613] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:16:50.568 [2024-12-16 10:48:50.471625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:50.568 [2024-12-16 10:48:50.471633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:16:50.568 [2024-12-16 10:48:50.471641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:16:50.568 [2024-12-16 10:48:50.471648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:50.568 [2024-12-16 10:48:50.475737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:50.568 [2024-12-16 10:48:50.475771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:50.568 [2024-12-16 10:48:50.475782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.071 ms 00:16:50.568 [2024-12-16 10:48:50.475791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:50.568 [2024-12-16 10:48:50.475866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:50.568 [2024-12-16 10:48:50.475878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize 
initialization 00:16:50.568 [2024-12-16 10:48:50.475886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:16:50.568 [2024-12-16 10:48:50.475896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:50.568 [2024-12-16 10:48:50.476642] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:50.568 [2024-12-16 10:48:50.477661] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 108.889 ms, result 0 00:16:50.568 [2024-12-16 10:48:50.478672] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:50.568 [2024-12-16 10:48:50.487664] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:51.513  [2024-12-16T10:48:52.890Z] Copying: 17/256 [MB] (17 MBps) [2024-12-16T10:48:53.829Z] Copying: 35/256 [MB] (18 MBps) [2024-12-16T10:48:54.772Z] Copying: 54/256 [MB] (18 MBps) [2024-12-16T10:48:55.720Z] Copying: 75/256 [MB] (21 MBps) [2024-12-16T10:48:56.666Z] Copying: 98/256 [MB] (22 MBps) [2024-12-16T10:48:57.611Z] Copying: 113/256 [MB] (15 MBps) [2024-12-16T10:48:58.556Z] Copying: 124/256 [MB] (11 MBps) [2024-12-16T10:48:59.500Z] Copying: 137836/262144 [kB] (9956 kBps) [2024-12-16T10:49:00.884Z] Copying: 144/256 [MB] (10 MBps) [2024-12-16T10:49:01.828Z] Copying: 155/256 [MB] (10 MBps) [2024-12-16T10:49:02.774Z] Copying: 166/256 [MB] (10 MBps) [2024-12-16T10:49:03.718Z] Copying: 176/256 [MB] (10 MBps) [2024-12-16T10:49:04.664Z] Copying: 190904/262144 [kB] (10216 kBps) [2024-12-16T10:49:05.613Z] Copying: 198/256 [MB] (11 MBps) [2024-12-16T10:49:06.558Z] Copying: 215/256 [MB] (17 MBps) [2024-12-16T10:49:07.503Z] Copying: 226/256 [MB] (10 MBps) [2024-12-16T10:49:08.893Z] Copying: 236/256 [MB] (10 MBps) [2024-12-16T10:49:08.893Z] Copying: 247/256 [MB] (10 MBps) [2024-12-16T10:49:08.893Z] Copying: 256/256 [MB] (average 14 MBps)[2024-12-16 10:49:08.692264] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:08.904 [2024-12-16 10:49:08.693379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.904 [2024-12-16 10:49:08.693401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:08.904 [2024-12-16 10:49:08.693416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:08.904 [2024-12-16 10:49:08.693425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.904 [2024-12-16 10:49:08.693445] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:08.904 [2024-12-16 10:49:08.693861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.904 [2024-12-16 10:49:08.693880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:08.904 [2024-12-16 10:49:08.693889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.403 ms 00:17:08.904 [2024-12-16 10:49:08.693896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.904 [2024-12-16 10:49:08.694159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.904 [2024-12-16 10:49:08.694170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:08.904 [2024-12-16 10:49:08.694179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.241 ms 
00:17:08.904 [2024-12-16 10:49:08.694187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.904 [2024-12-16 10:49:08.697864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.904 [2024-12-16 10:49:08.697883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:08.904 [2024-12-16 10:49:08.697893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.658 ms 00:17:08.904 [2024-12-16 10:49:08.697901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.904 [2024-12-16 10:49:08.704793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.904 [2024-12-16 10:49:08.704819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:08.904 [2024-12-16 10:49:08.704828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.857 ms 00:17:08.904 [2024-12-16 10:49:08.704835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.904 [2024-12-16 10:49:08.707103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.904 [2024-12-16 10:49:08.707231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:08.904 [2024-12-16 10:49:08.707246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.227 ms 00:17:08.905 [2024-12-16 10:49:08.707253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.905 [2024-12-16 10:49:08.710922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.905 [2024-12-16 10:49:08.710970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:08.905 [2024-12-16 10:49:08.710979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.637 ms 00:17:08.905 [2024-12-16 10:49:08.710986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.905 [2024-12-16 10:49:08.711104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.905 [2024-12-16 10:49:08.711113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:08.905 [2024-12-16 10:49:08.711127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:17:08.905 [2024-12-16 10:49:08.711133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.905 [2024-12-16 10:49:08.713732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.905 [2024-12-16 10:49:08.713762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:08.905 [2024-12-16 10:49:08.713770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.580 ms 00:17:08.905 [2024-12-16 10:49:08.713777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.905 [2024-12-16 10:49:08.716019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.905 [2024-12-16 10:49:08.716048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:08.905 [2024-12-16 10:49:08.716056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.212 ms 00:17:08.905 [2024-12-16 10:49:08.716062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.905 [2024-12-16 10:49:08.717950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.905 [2024-12-16 10:49:08.717978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:08.905 [2024-12-16 10:49:08.717986] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.857 ms 00:17:08.905 [2024-12-16 10:49:08.717992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.905 [2024-12-16 10:49:08.719587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.905 [2024-12-16 10:49:08.719618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:08.905 [2024-12-16 10:49:08.719627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.529 ms 00:17:08.905 [2024-12-16 10:49:08.719635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.905 [2024-12-16 10:49:08.719664] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:08.905 [2024-12-16 10:49:08.719678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:08.905 [2024-12-16 10:49:08.719688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:08.905 [2024-12-16 10:49:08.719697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:08.905 [2024-12-16 10:49:08.719705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:08.905 [2024-12-16 10:49:08.719714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:08.905 [2024-12-16 10:49:08.719722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:08.905 [2024-12-16 10:49:08.719731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:08.905 [2024-12-16 10:49:08.719739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:08.905 [2024-12-16 10:49:08.719748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:08.905 [2024-12-16 10:49:08.719756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:08.905 [2024-12-16 10:49:08.719764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:08.905 [2024-12-16 10:49:08.719772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:08.905 [2024-12-16 10:49:08.719781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:08.905 [2024-12-16 10:49:08.719789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:08.905 [2024-12-16 10:49:08.719798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:08.905 [2024-12-16 10:49:08.719806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:08.905 [2024-12-16 10:49:08.719814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:08.905 [2024-12-16 10:49:08.719823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:08.905 [2024-12-16 10:49:08.719831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:08.905 [2024-12-16 10:49:08.719840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 
261120 wr_cnt: 0 state: free 00:17:08.905 [2024-12-16 10:49:08.719848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:08.905 [2024-12-16 10:49:08.719856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:08.905 [2024-12-16 10:49:08.719863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:08.905 [2024-12-16 10:49:08.719870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:08.905 [2024-12-16 10:49:08.719877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:08.905 [2024-12-16 10:49:08.719884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:08.905 [2024-12-16 10:49:08.719892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:08.905 [2024-12-16 10:49:08.719899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:08.905 [2024-12-16 10:49:08.719906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:08.905 [2024-12-16 10:49:08.719914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:08.905 [2024-12-16 10:49:08.719921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:08.905 [2024-12-16 10:49:08.719943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:08.905 [2024-12-16 10:49:08.719951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:08.905 [2024-12-16 10:49:08.719959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:08.905 [2024-12-16 10:49:08.719966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:08.905 [2024-12-16 10:49:08.719981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:08.905 [2024-12-16 10:49:08.719988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:08.905 [2024-12-16 10:49:08.719996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:08.905 [2024-12-16 10:49:08.720003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:08.905 [2024-12-16 10:49:08.720010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:08.905 [2024-12-16 10:49:08.720017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:08.905 [2024-12-16 10:49:08.720024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:08.905 [2024-12-16 10:49:08.720032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:08.905 [2024-12-16 10:49:08.720039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:08.905 [2024-12-16 10:49:08.720046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:08.905 [2024-12-16 10:49:08.720053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:08.905 [2024-12-16 10:49:08.720060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:08.905 [2024-12-16 10:49:08.720067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:08.905 [2024-12-16 10:49:08.720074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:08.905 [2024-12-16 10:49:08.720082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:08.905 [2024-12-16 10:49:08.720089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:08.905 [2024-12-16 10:49:08.720096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:08.905 [2024-12-16 10:49:08.720103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:08.905 [2024-12-16 10:49:08.720110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:08.905 [2024-12-16 10:49:08.720118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:08.905 [2024-12-16 10:49:08.720125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:08.905 [2024-12-16 10:49:08.720132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:08.905 [2024-12-16 10:49:08.720139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:08.905 [2024-12-16 10:49:08.720146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:08.905 [2024-12-16 10:49:08.720153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:08.905 [2024-12-16 10:49:08.720160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:08.905 [2024-12-16 10:49:08.720167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:08.905 [2024-12-16 10:49:08.720174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:08.905 [2024-12-16 10:49:08.720181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:08.905 [2024-12-16 10:49:08.720189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:08.906 [2024-12-16 10:49:08.720196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:08.906 [2024-12-16 10:49:08.720203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:08.906 [2024-12-16 10:49:08.720210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:08.906 [2024-12-16 10:49:08.720217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:08.906 [2024-12-16 10:49:08.720224] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:08.906 [2024-12-16 10:49:08.720231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:08.906 [2024-12-16 10:49:08.720244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:08.906 [2024-12-16 10:49:08.720251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:08.906 [2024-12-16 10:49:08.720258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:08.906 [2024-12-16 10:49:08.720265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:08.906 [2024-12-16 10:49:08.720272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:08.906 [2024-12-16 10:49:08.720279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:08.906 [2024-12-16 10:49:08.720286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:08.906 [2024-12-16 10:49:08.720293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:08.906 [2024-12-16 10:49:08.720300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:08.906 [2024-12-16 10:49:08.720307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:08.906 [2024-12-16 10:49:08.720314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:08.906 [2024-12-16 10:49:08.720321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:08.906 [2024-12-16 10:49:08.720329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:08.906 [2024-12-16 10:49:08.720336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:08.906 [2024-12-16 10:49:08.720343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:08.906 [2024-12-16 10:49:08.720350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:08.906 [2024-12-16 10:49:08.720357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:08.906 [2024-12-16 10:49:08.720364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:08.906 [2024-12-16 10:49:08.720371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:08.906 [2024-12-16 10:49:08.720378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:08.906 [2024-12-16 10:49:08.720386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:08.906 [2024-12-16 10:49:08.720393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:08.906 [2024-12-16 10:49:08.720399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:08.906 [2024-12-16 
10:49:08.720407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:08.906 [2024-12-16 10:49:08.720414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:08.906 [2024-12-16 10:49:08.720421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:08.906 [2024-12-16 10:49:08.720429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:08.906 [2024-12-16 10:49:08.720436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:08.906 [2024-12-16 10:49:08.720443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:08.906 [2024-12-16 10:49:08.720458] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:08.906 [2024-12-16 10:49:08.720469] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 37bf0612-866e-4ad5-8f9f-97f4cdf0ff4f 00:17:08.906 [2024-12-16 10:49:08.720476] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:08.906 [2024-12-16 10:49:08.720483] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:08.906 [2024-12-16 10:49:08.720489] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:08.906 [2024-12-16 10:49:08.720496] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:08.906 [2024-12-16 10:49:08.720503] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:08.906 [2024-12-16 10:49:08.720510] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:08.906 [2024-12-16 10:49:08.720517] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:08.906 [2024-12-16 10:49:08.720523] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:08.906 [2024-12-16 10:49:08.720529] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:08.906 [2024-12-16 10:49:08.720536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.906 [2024-12-16 10:49:08.720550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:08.906 [2024-12-16 10:49:08.720562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.873 ms 00:17:08.906 [2024-12-16 10:49:08.720571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.906 [2024-12-16 10:49:08.722002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.906 [2024-12-16 10:49:08.722023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:08.906 [2024-12-16 10:49:08.722032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.414 ms 00:17:08.906 [2024-12-16 10:49:08.722038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.906 [2024-12-16 10:49:08.722122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.906 [2024-12-16 10:49:08.722130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:08.906 [2024-12-16 10:49:08.722138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:17:08.906 [2024-12-16 10:49:08.722145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.906 [2024-12-16 10:49:08.726726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Rollback 00:17:08.906 [2024-12-16 10:49:08.726757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:08.906 [2024-12-16 10:49:08.726766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:08.906 [2024-12-16 10:49:08.726773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.906 [2024-12-16 10:49:08.726835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:08.906 [2024-12-16 10:49:08.726842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:08.906 [2024-12-16 10:49:08.726850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:08.906 [2024-12-16 10:49:08.726856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.906 [2024-12-16 10:49:08.726892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:08.906 [2024-12-16 10:49:08.726900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:08.906 [2024-12-16 10:49:08.726912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:08.906 [2024-12-16 10:49:08.726919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.906 [2024-12-16 10:49:08.726952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:08.906 [2024-12-16 10:49:08.726962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:08.906 [2024-12-16 10:49:08.726969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:08.906 [2024-12-16 10:49:08.726976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.906 [2024-12-16 10:49:08.735864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:08.906 [2024-12-16 10:49:08.735900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:08.906 [2024-12-16 10:49:08.735910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:08.906 [2024-12-16 10:49:08.735919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.906 [2024-12-16 10:49:08.742943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:08.906 [2024-12-16 10:49:08.742980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:08.906 [2024-12-16 10:49:08.742990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:08.906 [2024-12-16 10:49:08.742998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.906 [2024-12-16 10:49:08.743023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:08.906 [2024-12-16 10:49:08.743036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:08.906 [2024-12-16 10:49:08.743043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:08.906 [2024-12-16 10:49:08.743051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.906 [2024-12-16 10:49:08.743078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:08.906 [2024-12-16 10:49:08.743086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:08.906 [2024-12-16 10:49:08.743096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:08.906 [2024-12-16 10:49:08.743103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.906 [2024-12-16 
10:49:08.743168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:08.906 [2024-12-16 10:49:08.743177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:08.906 [2024-12-16 10:49:08.743185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:08.906 [2024-12-16 10:49:08.743192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.906 [2024-12-16 10:49:08.743223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:08.906 [2024-12-16 10:49:08.743232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:08.906 [2024-12-16 10:49:08.743240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:08.906 [2024-12-16 10:49:08.743249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.906 [2024-12-16 10:49:08.743293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:08.906 [2024-12-16 10:49:08.743301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:08.906 [2024-12-16 10:49:08.743312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:08.906 [2024-12-16 10:49:08.743318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.906 [2024-12-16 10:49:08.743360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:08.906 [2024-12-16 10:49:08.743368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:08.906 [2024-12-16 10:49:08.743378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:08.906 [2024-12-16 10:49:08.743385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.906 [2024-12-16 10:49:08.743513] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 50.110 ms, result 0 00:17:09.168 00:17:09.168 00:17:09.168 10:49:08 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:17:09.168 10:49:08 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:17:09.752 10:49:09 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:09.752 [2024-12-16 10:49:09.564874] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:17:09.752 [2024-12-16 10:49:09.565243] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85530 ] 00:17:09.752 [2024-12-16 10:49:09.703849] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:10.013 [2024-12-16 10:49:09.753174] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:10.013 [2024-12-16 10:49:09.861636] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:10.013 [2024-12-16 10:49:09.861713] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:10.276 [2024-12-16 10:49:10.023167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.276 [2024-12-16 10:49:10.023476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:10.276 [2024-12-16 10:49:10.023514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:10.276 [2024-12-16 10:49:10.023528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.276 [2024-12-16 10:49:10.026573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.276 [2024-12-16 10:49:10.026638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:10.276 [2024-12-16 10:49:10.026655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.012 ms 00:17:10.276 [2024-12-16 10:49:10.026664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.276 [2024-12-16 10:49:10.026835] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:10.276 [2024-12-16 10:49:10.027179] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:10.276 [2024-12-16 10:49:10.027200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.276 [2024-12-16 10:49:10.027210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:10.276 [2024-12-16 10:49:10.027224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.378 ms 00:17:10.276 [2024-12-16 10:49:10.027233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.276 [2024-12-16 10:49:10.029135] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:10.276 [2024-12-16 10:49:10.033132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.276 [2024-12-16 10:49:10.033345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:10.276 [2024-12-16 10:49:10.033375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.999 ms 00:17:10.276 [2024-12-16 10:49:10.033386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.276 [2024-12-16 10:49:10.033472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.276 [2024-12-16 10:49:10.033483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:10.276 [2024-12-16 10:49:10.033492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:17:10.276 [2024-12-16 10:49:10.033501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.276 [2024-12-16 10:49:10.043341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:17:10.276 [2024-12-16 10:49:10.043396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:10.276 [2024-12-16 10:49:10.043412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.793 ms 00:17:10.276 [2024-12-16 10:49:10.043423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.276 [2024-12-16 10:49:10.043626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.276 [2024-12-16 10:49:10.043641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:10.276 [2024-12-16 10:49:10.043654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.103 ms 00:17:10.276 [2024-12-16 10:49:10.043666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.276 [2024-12-16 10:49:10.043708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.276 [2024-12-16 10:49:10.043721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:10.276 [2024-12-16 10:49:10.043735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:17:10.276 [2024-12-16 10:49:10.043746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.276 [2024-12-16 10:49:10.043778] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:10.276 [2024-12-16 10:49:10.046381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.276 [2024-12-16 10:49:10.046432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:10.276 [2024-12-16 10:49:10.046446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.613 ms 00:17:10.276 [2024-12-16 10:49:10.046457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.276 [2024-12-16 10:49:10.046527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.276 [2024-12-16 10:49:10.046544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:10.276 [2024-12-16 10:49:10.046560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:17:10.276 [2024-12-16 10:49:10.046571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.276 [2024-12-16 10:49:10.046601] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:10.276 [2024-12-16 10:49:10.046632] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:10.276 [2024-12-16 10:49:10.046696] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:10.276 [2024-12-16 10:49:10.046722] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:10.276 [2024-12-16 10:49:10.046861] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:10.276 [2024-12-16 10:49:10.046877] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:10.276 [2024-12-16 10:49:10.046892] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:10.276 [2024-12-16 10:49:10.046907] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:10.276 [2024-12-16 10:49:10.046920] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:10.276 [2024-12-16 10:49:10.046955] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:10.276 [2024-12-16 10:49:10.046973] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:10.276 [2024-12-16 10:49:10.046985] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:10.276 [2024-12-16 10:49:10.046996] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:10.276 [2024-12-16 10:49:10.047007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.276 [2024-12-16 10:49:10.047021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:10.276 [2024-12-16 10:49:10.047036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.410 ms 00:17:10.276 [2024-12-16 10:49:10.047046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.276 [2024-12-16 10:49:10.047169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.276 [2024-12-16 10:49:10.047188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:10.276 [2024-12-16 10:49:10.047201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:17:10.276 [2024-12-16 10:49:10.047211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.276 [2024-12-16 10:49:10.047349] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:10.276 [2024-12-16 10:49:10.047367] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:10.276 [2024-12-16 10:49:10.047384] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:10.276 [2024-12-16 10:49:10.047398] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:10.276 [2024-12-16 10:49:10.047410] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:10.276 [2024-12-16 10:49:10.047419] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:10.276 [2024-12-16 10:49:10.047430] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:10.276 [2024-12-16 10:49:10.047440] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:10.276 [2024-12-16 10:49:10.047456] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:10.276 [2024-12-16 10:49:10.047466] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:10.277 [2024-12-16 10:49:10.047476] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:10.277 [2024-12-16 10:49:10.047488] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:10.277 [2024-12-16 10:49:10.047498] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:10.277 [2024-12-16 10:49:10.047507] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:10.277 [2024-12-16 10:49:10.047518] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:10.277 [2024-12-16 10:49:10.047526] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:10.277 [2024-12-16 10:49:10.047535] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:10.277 [2024-12-16 10:49:10.047544] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:10.277 [2024-12-16 10:49:10.047555] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:10.277 [2024-12-16 10:49:10.047566] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:10.277 [2024-12-16 10:49:10.047577] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:10.277 [2024-12-16 10:49:10.047589] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:10.277 [2024-12-16 10:49:10.047600] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:10.277 [2024-12-16 10:49:10.047612] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:10.277 [2024-12-16 10:49:10.047626] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:10.277 [2024-12-16 10:49:10.047636] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:10.277 [2024-12-16 10:49:10.047645] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:10.277 [2024-12-16 10:49:10.047654] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:10.277 [2024-12-16 10:49:10.047663] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:10.277 [2024-12-16 10:49:10.047672] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:10.277 [2024-12-16 10:49:10.047681] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:10.277 [2024-12-16 10:49:10.047691] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:10.277 [2024-12-16 10:49:10.047701] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:10.277 [2024-12-16 10:49:10.047709] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:10.277 [2024-12-16 10:49:10.047718] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:10.277 [2024-12-16 10:49:10.047728] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:10.277 [2024-12-16 10:49:10.047737] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:10.277 [2024-12-16 10:49:10.047747] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:10.277 [2024-12-16 10:49:10.047760] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:10.277 [2024-12-16 10:49:10.047779] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:10.277 [2024-12-16 10:49:10.047798] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:10.277 [2024-12-16 10:49:10.047814] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:10.277 [2024-12-16 10:49:10.047825] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:10.277 [2024-12-16 10:49:10.047837] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:10.277 [2024-12-16 10:49:10.047851] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:10.277 [2024-12-16 10:49:10.047862] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:10.277 [2024-12-16 10:49:10.047875] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:10.277 [2024-12-16 10:49:10.047893] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:10.277 [2024-12-16 10:49:10.047903] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:10.277 [2024-12-16 10:49:10.047913] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:10.277 
[2024-12-16 10:49:10.047923] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:10.277 [2024-12-16 10:49:10.047948] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:10.277 [2024-12-16 10:49:10.047958] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:10.277 [2024-12-16 10:49:10.047970] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:10.277 [2024-12-16 10:49:10.047983] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:10.277 [2024-12-16 10:49:10.047996] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:10.277 [2024-12-16 10:49:10.048025] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:10.277 [2024-12-16 10:49:10.048041] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:10.277 [2024-12-16 10:49:10.048052] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:10.277 [2024-12-16 10:49:10.048063] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:10.277 [2024-12-16 10:49:10.048074] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:10.277 [2024-12-16 10:49:10.048084] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:10.277 [2024-12-16 10:49:10.048101] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:10.277 [2024-12-16 10:49:10.048112] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:10.277 [2024-12-16 10:49:10.048122] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:10.277 [2024-12-16 10:49:10.048133] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:10.277 [2024-12-16 10:49:10.048143] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:10.277 [2024-12-16 10:49:10.048153] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:10.277 [2024-12-16 10:49:10.048165] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:10.277 [2024-12-16 10:49:10.048177] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:10.277 [2024-12-16 10:49:10.048189] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:10.277 [2024-12-16 10:49:10.048202] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:17:10.277 [2024-12-16 10:49:10.048215] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:10.277 [2024-12-16 10:49:10.048225] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:10.277 [2024-12-16 10:49:10.048235] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:10.277 [2024-12-16 10:49:10.048245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.277 [2024-12-16 10:49:10.048257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:10.277 [2024-12-16 10:49:10.048273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.988 ms 00:17:10.277 [2024-12-16 10:49:10.048285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.277 [2024-12-16 10:49:10.076434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.277 [2024-12-16 10:49:10.076507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:10.277 [2024-12-16 10:49:10.076528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.041 ms 00:17:10.277 [2024-12-16 10:49:10.076541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.277 [2024-12-16 10:49:10.076798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.277 [2024-12-16 10:49:10.076837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:10.277 [2024-12-16 10:49:10.076865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.103 ms 00:17:10.277 [2024-12-16 10:49:10.076878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.277 [2024-12-16 10:49:10.091263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.277 [2024-12-16 10:49:10.091320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:10.277 [2024-12-16 10:49:10.091346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.345 ms 00:17:10.277 [2024-12-16 10:49:10.091359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.277 [2024-12-16 10:49:10.091477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.277 [2024-12-16 10:49:10.091497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:10.277 [2024-12-16 10:49:10.091511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:10.277 [2024-12-16 10:49:10.091524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.277 [2024-12-16 10:49:10.092096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.277 [2024-12-16 10:49:10.092134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:10.277 [2024-12-16 10:49:10.092152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.535 ms 00:17:10.277 [2024-12-16 10:49:10.092167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.277 [2024-12-16 10:49:10.092389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.277 [2024-12-16 10:49:10.092417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:10.277 [2024-12-16 10:49:10.092436] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.175 ms 00:17:10.277 [2024-12-16 10:49:10.092454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.277 [2024-12-16 10:49:10.101065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.277 [2024-12-16 10:49:10.101249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:10.277 [2024-12-16 10:49:10.101268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.571 ms 00:17:10.277 [2024-12-16 10:49:10.101277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.277 [2024-12-16 10:49:10.105333] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:17:10.277 [2024-12-16 10:49:10.105380] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:10.278 [2024-12-16 10:49:10.105393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.278 [2024-12-16 10:49:10.105403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:10.278 [2024-12-16 10:49:10.105413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.999 ms 00:17:10.278 [2024-12-16 10:49:10.105420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.278 [2024-12-16 10:49:10.121483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.278 [2024-12-16 10:49:10.121528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:10.278 [2024-12-16 10:49:10.121540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.982 ms 00:17:10.278 [2024-12-16 10:49:10.121549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.278 [2024-12-16 10:49:10.124502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.278 [2024-12-16 10:49:10.124671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:10.278 [2024-12-16 10:49:10.124690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.825 ms 00:17:10.278 [2024-12-16 10:49:10.124698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.278 [2024-12-16 10:49:10.127691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.278 [2024-12-16 10:49:10.127856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:10.278 [2024-12-16 10:49:10.127884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.873 ms 00:17:10.278 [2024-12-16 10:49:10.127892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.278 [2024-12-16 10:49:10.128360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.278 [2024-12-16 10:49:10.128397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:10.278 [2024-12-16 10:49:10.128409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.288 ms 00:17:10.278 [2024-12-16 10:49:10.128417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.278 [2024-12-16 10:49:10.151289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.278 [2024-12-16 10:49:10.151361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:10.278 [2024-12-16 10:49:10.151376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
22.845 ms 00:17:10.278 [2024-12-16 10:49:10.151385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.278 [2024-12-16 10:49:10.159380] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:10.278 [2024-12-16 10:49:10.177621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.278 [2024-12-16 10:49:10.177671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:10.278 [2024-12-16 10:49:10.177686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.134 ms 00:17:10.278 [2024-12-16 10:49:10.177704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.278 [2024-12-16 10:49:10.177792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.278 [2024-12-16 10:49:10.177804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:10.278 [2024-12-16 10:49:10.177814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:17:10.278 [2024-12-16 10:49:10.177835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.278 [2024-12-16 10:49:10.177894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.278 [2024-12-16 10:49:10.177904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:10.278 [2024-12-16 10:49:10.177912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:17:10.278 [2024-12-16 10:49:10.177920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.278 [2024-12-16 10:49:10.178076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.278 [2024-12-16 10:49:10.178094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:10.278 [2024-12-16 10:49:10.178104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:17:10.278 [2024-12-16 10:49:10.178112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.278 [2024-12-16 10:49:10.178159] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:10.278 [2024-12-16 10:49:10.178174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.278 [2024-12-16 10:49:10.178182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:10.278 [2024-12-16 10:49:10.178191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:17:10.278 [2024-12-16 10:49:10.178199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.278 [2024-12-16 10:49:10.183863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.278 [2024-12-16 10:49:10.183911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:10.278 [2024-12-16 10:49:10.183923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.638 ms 00:17:10.278 [2024-12-16 10:49:10.183953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.278 [2024-12-16 10:49:10.184048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.278 [2024-12-16 10:49:10.184062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:10.278 [2024-12-16 10:49:10.184077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:17:10.278 [2024-12-16 10:49:10.184088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.278 
[2024-12-16 10:49:10.185083] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:10.278 [2024-12-16 10:49:10.186455] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 161.581 ms, result 0 00:17:10.278 [2024-12-16 10:49:10.187528] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:10.278 [2024-12-16 10:49:10.195149] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:10.542  [2024-12-16T10:49:10.531Z] Copying: 4096/4096 [kB] (average 14 MBps)[2024-12-16 10:49:10.480850] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:10.542 [2024-12-16 10:49:10.482346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.542 [2024-12-16 10:49:10.482396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:10.542 [2024-12-16 10:49:10.482413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:10.542 [2024-12-16 10:49:10.482423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.542 [2024-12-16 10:49:10.482451] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:10.542 [2024-12-16 10:49:10.483123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.542 [2024-12-16 10:49:10.483161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:10.542 [2024-12-16 10:49:10.483173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.659 ms 00:17:10.542 [2024-12-16 10:49:10.483183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.542 [2024-12-16 10:49:10.485183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.542 [2024-12-16 10:49:10.485225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:10.542 [2024-12-16 10:49:10.485235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.975 ms 00:17:10.542 [2024-12-16 10:49:10.485243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.542 [2024-12-16 10:49:10.489719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.542 [2024-12-16 10:49:10.489755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:10.542 [2024-12-16 10:49:10.489765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.452 ms 00:17:10.542 [2024-12-16 10:49:10.489774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.542 [2024-12-16 10:49:10.496739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.542 [2024-12-16 10:49:10.496800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:10.542 [2024-12-16 10:49:10.496810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.933 ms 00:17:10.542 [2024-12-16 10:49:10.496818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.542 [2024-12-16 10:49:10.499801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.542 [2024-12-16 10:49:10.499992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:10.542 [2024-12-16 10:49:10.500011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 2.935 ms 00:17:10.542 [2024-12-16 10:49:10.500018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.542 [2024-12-16 10:49:10.504250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.542 [2024-12-16 10:49:10.504305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:10.542 [2024-12-16 10:49:10.504316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.158 ms 00:17:10.542 [2024-12-16 10:49:10.504324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.542 [2024-12-16 10:49:10.504453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.542 [2024-12-16 10:49:10.504464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:10.542 [2024-12-16 10:49:10.504473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:17:10.542 [2024-12-16 10:49:10.504481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.542 [2024-12-16 10:49:10.507382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.542 [2024-12-16 10:49:10.507538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:10.542 [2024-12-16 10:49:10.507555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.878 ms 00:17:10.542 [2024-12-16 10:49:10.507562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.542 [2024-12-16 10:49:10.510410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.542 [2024-12-16 10:49:10.510453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:10.542 [2024-12-16 10:49:10.510463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.810 ms 00:17:10.542 [2024-12-16 10:49:10.510470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.542 [2024-12-16 10:49:10.512607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.542 [2024-12-16 10:49:10.512650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:10.542 [2024-12-16 10:49:10.512660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.094 ms 00:17:10.542 [2024-12-16 10:49:10.512666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.542 [2024-12-16 10:49:10.514961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.542 [2024-12-16 10:49:10.515000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:10.542 [2024-12-16 10:49:10.515010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.224 ms 00:17:10.542 [2024-12-16 10:49:10.515018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.542 [2024-12-16 10:49:10.515058] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:10.542 [2024-12-16 10:49:10.515074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:10.542 [2024-12-16 10:49:10.515084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:10.542 [2024-12-16 10:49:10.515092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:10.542 [2024-12-16 10:49:10.515100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:10.542 [2024-12-16 
10:49:10.515108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:10.542 [2024-12-16 10:49:10.515115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:10.542 [2024-12-16 10:49:10.515123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:10.542 [2024-12-16 10:49:10.515130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:10.543 [2024-12-16 10:49:10.515138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:10.543 [2024-12-16 10:49:10.515146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:10.543 [2024-12-16 10:49:10.515153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:10.543 [2024-12-16 10:49:10.515160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:10.543 [2024-12-16 10:49:10.515168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:10.543 [2024-12-16 10:49:10.515176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:10.543 [2024-12-16 10:49:10.515183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:10.543 [2024-12-16 10:49:10.515190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:10.543 [2024-12-16 10:49:10.515197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:10.543 [2024-12-16 10:49:10.515204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:10.543 [2024-12-16 10:49:10.515211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:10.543 [2024-12-16 10:49:10.515219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:10.543 [2024-12-16 10:49:10.515226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:10.543 [2024-12-16 10:49:10.515233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:10.543 [2024-12-16 10:49:10.515240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:10.543 [2024-12-16 10:49:10.515247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:10.543 [2024-12-16 10:49:10.515256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:10.543 [2024-12-16 10:49:10.515264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:10.543 [2024-12-16 10:49:10.515273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:10.543 [2024-12-16 10:49:10.515279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:10.543 [2024-12-16 10:49:10.515287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 
00:17:10.543 [2024-12-16 10:49:10.515294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:10.543 [2024-12-16 10:49:10.515304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:10.543 [2024-12-16 10:49:10.515312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:10.543 [2024-12-16 10:49:10.515319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:10.543 [2024-12-16 10:49:10.515327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:10.543 [2024-12-16 10:49:10.515334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:10.543 [2024-12-16 10:49:10.515352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:10.543 [2024-12-16 10:49:10.515360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:10.543 [2024-12-16 10:49:10.515367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:10.543 [2024-12-16 10:49:10.515375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:10.543 [2024-12-16 10:49:10.515382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:10.543 [2024-12-16 10:49:10.515390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:10.543 [2024-12-16 10:49:10.515398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:10.543 [2024-12-16 10:49:10.515405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:10.543 [2024-12-16 10:49:10.515413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:10.543 [2024-12-16 10:49:10.515421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:10.543 [2024-12-16 10:49:10.515429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:10.543 [2024-12-16 10:49:10.515437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:10.543 [2024-12-16 10:49:10.515444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:10.543 [2024-12-16 10:49:10.515452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:10.543 [2024-12-16 10:49:10.515460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:10.543 [2024-12-16 10:49:10.515467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:10.543 [2024-12-16 10:49:10.515475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:10.543 [2024-12-16 10:49:10.515482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:10.543 [2024-12-16 10:49:10.515490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 
wr_cnt: 0 state: free 00:17:10.543 [2024-12-16 10:49:10.515498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:10.543 [2024-12-16 10:49:10.515507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:10.543 [2024-12-16 10:49:10.515515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:10.543 [2024-12-16 10:49:10.515522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:10.543 [2024-12-16 10:49:10.515531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:10.543 [2024-12-16 10:49:10.515538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:10.543 [2024-12-16 10:49:10.515545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:10.543 [2024-12-16 10:49:10.515552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:10.543 [2024-12-16 10:49:10.515561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:10.543 [2024-12-16 10:49:10.515569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:10.543 [2024-12-16 10:49:10.515577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:10.543 [2024-12-16 10:49:10.515584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:10.543 [2024-12-16 10:49:10.515592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:10.543 [2024-12-16 10:49:10.515600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:10.543 [2024-12-16 10:49:10.515607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:10.543 [2024-12-16 10:49:10.515614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:10.543 [2024-12-16 10:49:10.515621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:10.543 [2024-12-16 10:49:10.515629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:10.543 [2024-12-16 10:49:10.515636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:10.543 [2024-12-16 10:49:10.515644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:10.543 [2024-12-16 10:49:10.515651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:10.543 [2024-12-16 10:49:10.515659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:10.543 [2024-12-16 10:49:10.515667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:10.543 [2024-12-16 10:49:10.515675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:10.543 [2024-12-16 10:49:10.515682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:10.543 [2024-12-16 10:49:10.515689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:10.543 [2024-12-16 10:49:10.515697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:10.543 [2024-12-16 10:49:10.515705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:10.543 [2024-12-16 10:49:10.515712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:10.543 [2024-12-16 10:49:10.515720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:10.543 [2024-12-16 10:49:10.515727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:10.543 [2024-12-16 10:49:10.515736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:10.543 [2024-12-16 10:49:10.515743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:10.543 [2024-12-16 10:49:10.515750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:10.543 [2024-12-16 10:49:10.515757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:10.543 [2024-12-16 10:49:10.515764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:10.543 [2024-12-16 10:49:10.515771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:10.543 [2024-12-16 10:49:10.515778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:10.544 [2024-12-16 10:49:10.515786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:10.544 [2024-12-16 10:49:10.515793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:10.544 [2024-12-16 10:49:10.515803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:10.544 [2024-12-16 10:49:10.515811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:10.544 [2024-12-16 10:49:10.515818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:10.544 [2024-12-16 10:49:10.515826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:10.544 [2024-12-16 10:49:10.515833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:10.544 [2024-12-16 10:49:10.515841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:10.544 [2024-12-16 10:49:10.515857] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:10.544 [2024-12-16 10:49:10.515865] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 37bf0612-866e-4ad5-8f9f-97f4cdf0ff4f 00:17:10.544 [2024-12-16 10:49:10.515873] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:10.544 [2024-12-16 10:49:10.515880] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:10.544 
[2024-12-16 10:49:10.515887] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:10.544 [2024-12-16 10:49:10.515895] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:10.544 [2024-12-16 10:49:10.515901] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:10.544 [2024-12-16 10:49:10.515910] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:10.544 [2024-12-16 10:49:10.515922] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:10.544 [2024-12-16 10:49:10.515950] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:10.544 [2024-12-16 10:49:10.515957] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:10.544 [2024-12-16 10:49:10.515964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.544 [2024-12-16 10:49:10.515975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:10.544 [2024-12-16 10:49:10.515983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.907 ms 00:17:10.544 [2024-12-16 10:49:10.515991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.544 [2024-12-16 10:49:10.518147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.544 [2024-12-16 10:49:10.518295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:10.544 [2024-12-16 10:49:10.518320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.138 ms 00:17:10.544 [2024-12-16 10:49:10.518329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.544 [2024-12-16 10:49:10.518447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.544 [2024-12-16 10:49:10.518457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:10.544 [2024-12-16 10:49:10.518467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:17:10.544 [2024-12-16 10:49:10.518475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.544 [2024-12-16 10:49:10.525462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:10.544 [2024-12-16 10:49:10.525619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:10.544 [2024-12-16 10:49:10.525637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:10.544 [2024-12-16 10:49:10.525654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.544 [2024-12-16 10:49:10.525771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:10.544 [2024-12-16 10:49:10.525786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:10.544 [2024-12-16 10:49:10.525798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:10.544 [2024-12-16 10:49:10.525806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.544 [2024-12-16 10:49:10.525854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:10.544 [2024-12-16 10:49:10.525864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:10.544 [2024-12-16 10:49:10.525872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:10.544 [2024-12-16 10:49:10.525880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.544 [2024-12-16 10:49:10.525901] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Rollback 00:17:10.544 [2024-12-16 10:49:10.525911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:10.544 [2024-12-16 10:49:10.525919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:10.544 [2024-12-16 10:49:10.525967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.806 [2024-12-16 10:49:10.539584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:10.806 [2024-12-16 10:49:10.539636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:10.806 [2024-12-16 10:49:10.539647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:10.806 [2024-12-16 10:49:10.539657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.806 [2024-12-16 10:49:10.550561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:10.806 [2024-12-16 10:49:10.550614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:10.806 [2024-12-16 10:49:10.550637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:10.806 [2024-12-16 10:49:10.550650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.806 [2024-12-16 10:49:10.550702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:10.806 [2024-12-16 10:49:10.550711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:10.806 [2024-12-16 10:49:10.550720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:10.806 [2024-12-16 10:49:10.550729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.806 [2024-12-16 10:49:10.550764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:10.806 [2024-12-16 10:49:10.550772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:10.806 [2024-12-16 10:49:10.550784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:10.806 [2024-12-16 10:49:10.550792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.806 [2024-12-16 10:49:10.550864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:10.806 [2024-12-16 10:49:10.550875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:10.806 [2024-12-16 10:49:10.550883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:10.806 [2024-12-16 10:49:10.550891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.806 [2024-12-16 10:49:10.550923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:10.806 [2024-12-16 10:49:10.550957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:10.806 [2024-12-16 10:49:10.550966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:10.806 [2024-12-16 10:49:10.550978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.806 [2024-12-16 10:49:10.551036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:10.806 [2024-12-16 10:49:10.551046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:10.806 [2024-12-16 10:49:10.551055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:10.806 [2024-12-16 10:49:10.551063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:17:10.806 [2024-12-16 10:49:10.551135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:10.806 [2024-12-16 10:49:10.551146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:10.806 [2024-12-16 10:49:10.551158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:10.806 [2024-12-16 10:49:10.551167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.806 [2024-12-16 10:49:10.551319] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 68.944 ms, result 0 00:17:10.806 00:17:10.806 00:17:10.806 10:49:10 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=85550 00:17:10.806 10:49:10 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 85550 00:17:10.806 10:49:10 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:17:10.806 10:49:10 ftl.ftl_trim -- common/autotest_common.sh@831 -- # '[' -z 85550 ']' 00:17:10.806 10:49:10 ftl.ftl_trim -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:10.806 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:10.806 10:49:10 ftl.ftl_trim -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:10.806 10:49:10 ftl.ftl_trim -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:10.806 10:49:10 ftl.ftl_trim -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:10.806 10:49:10 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:17:11.067 [2024-12-16 10:49:10.874127] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:17:11.067 [2024-12-16 10:49:10.875074] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85550 ] 00:17:11.067 [2024-12-16 10:49:11.013329] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:11.328 [2024-12-16 10:49:11.065527] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:11.900 10:49:11 ftl.ftl_trim -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:11.900 10:49:11 ftl.ftl_trim -- common/autotest_common.sh@864 -- # return 0 00:17:11.900 10:49:11 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:17:12.162 [2024-12-16 10:49:11.940130] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:12.162 [2024-12-16 10:49:11.940216] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:12.162 [2024-12-16 10:49:12.115563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.162 [2024-12-16 10:49:12.115612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:12.162 [2024-12-16 10:49:12.115624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:12.162 [2024-12-16 10:49:12.115634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.162 [2024-12-16 10:49:12.117869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.162 [2024-12-16 10:49:12.117905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:12.162 [2024-12-16 10:49:12.117919] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.218 ms 00:17:12.162 [2024-12-16 10:49:12.117940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.162 [2024-12-16 10:49:12.118005] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:12.162 [2024-12-16 10:49:12.118233] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:12.162 [2024-12-16 10:49:12.118246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.162 [2024-12-16 10:49:12.118255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:12.162 [2024-12-16 10:49:12.118268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.247 ms 00:17:12.162 [2024-12-16 10:49:12.118276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.162 [2024-12-16 10:49:12.119593] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:12.162 [2024-12-16 10:49:12.122266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.162 [2024-12-16 10:49:12.122302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:12.162 [2024-12-16 10:49:12.122314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.671 ms 00:17:12.162 [2024-12-16 10:49:12.122322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.162 [2024-12-16 10:49:12.122377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.162 [2024-12-16 10:49:12.122387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:12.162 [2024-12-16 10:49:12.122403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:17:12.162 [2024-12-16 10:49:12.122410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.162 [2024-12-16 10:49:12.127000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.162 [2024-12-16 10:49:12.127021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:12.162 [2024-12-16 10:49:12.127032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.541 ms 00:17:12.162 [2024-12-16 10:49:12.127039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.162 [2024-12-16 10:49:12.127134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.162 [2024-12-16 10:49:12.127144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:12.162 [2024-12-16 10:49:12.127154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:17:12.162 [2024-12-16 10:49:12.127161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.162 [2024-12-16 10:49:12.127188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.162 [2024-12-16 10:49:12.127197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:12.162 [2024-12-16 10:49:12.127206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:17:12.162 [2024-12-16 10:49:12.127218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.162 [2024-12-16 10:49:12.127243] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:12.162 [2024-12-16 10:49:12.128562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:17:12.162 [2024-12-16 10:49:12.128592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:12.162 [2024-12-16 10:49:12.128601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.327 ms 00:17:12.162 [2024-12-16 10:49:12.128610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.162 [2024-12-16 10:49:12.128643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.162 [2024-12-16 10:49:12.128652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:12.162 [2024-12-16 10:49:12.128660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:17:12.162 [2024-12-16 10:49:12.128668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.162 [2024-12-16 10:49:12.128688] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:12.162 [2024-12-16 10:49:12.128706] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:12.162 [2024-12-16 10:49:12.128739] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:12.162 [2024-12-16 10:49:12.128757] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:12.162 [2024-12-16 10:49:12.128865] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:12.162 [2024-12-16 10:49:12.128877] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:12.162 [2024-12-16 10:49:12.128892] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:12.162 [2024-12-16 10:49:12.128903] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:12.162 [2024-12-16 10:49:12.128912] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:12.162 [2024-12-16 10:49:12.128925] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:12.162 [2024-12-16 10:49:12.128950] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:12.162 [2024-12-16 10:49:12.128959] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:12.162 [2024-12-16 10:49:12.128966] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:12.162 [2024-12-16 10:49:12.128975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.162 [2024-12-16 10:49:12.128984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:12.162 [2024-12-16 10:49:12.128994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.287 ms 00:17:12.162 [2024-12-16 10:49:12.129000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.162 [2024-12-16 10:49:12.129092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.162 [2024-12-16 10:49:12.129099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:12.162 [2024-12-16 10:49:12.129109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:17:12.162 [2024-12-16 10:49:12.129115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.162 [2024-12-16 10:49:12.129219] 
ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:12.162 [2024-12-16 10:49:12.129229] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:12.162 [2024-12-16 10:49:12.129243] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:12.162 [2024-12-16 10:49:12.129252] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:12.162 [2024-12-16 10:49:12.129263] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:12.162 [2024-12-16 10:49:12.129271] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:12.162 [2024-12-16 10:49:12.129281] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:12.162 [2024-12-16 10:49:12.129289] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:12.162 [2024-12-16 10:49:12.129298] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:12.162 [2024-12-16 10:49:12.129305] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:12.163 [2024-12-16 10:49:12.129315] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:12.163 [2024-12-16 10:49:12.129322] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:12.163 [2024-12-16 10:49:12.129331] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:12.163 [2024-12-16 10:49:12.129339] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:12.163 [2024-12-16 10:49:12.129348] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:12.163 [2024-12-16 10:49:12.129355] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:12.163 [2024-12-16 10:49:12.129364] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:12.163 [2024-12-16 10:49:12.129371] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:12.163 [2024-12-16 10:49:12.129380] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:12.163 [2024-12-16 10:49:12.129388] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:12.163 [2024-12-16 10:49:12.129399] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:12.163 [2024-12-16 10:49:12.129406] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:12.163 [2024-12-16 10:49:12.129415] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:12.163 [2024-12-16 10:49:12.129422] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:12.163 [2024-12-16 10:49:12.129431] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:12.163 [2024-12-16 10:49:12.129438] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:12.163 [2024-12-16 10:49:12.129447] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:12.163 [2024-12-16 10:49:12.129454] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:12.163 [2024-12-16 10:49:12.129463] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:12.163 [2024-12-16 10:49:12.129471] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:12.163 [2024-12-16 10:49:12.129479] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:12.163 [2024-12-16 10:49:12.129488] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:12.163 [2024-12-16 
10:49:12.129496] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:12.163 [2024-12-16 10:49:12.129504] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:12.163 [2024-12-16 10:49:12.129515] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:12.163 [2024-12-16 10:49:12.129522] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:12.163 [2024-12-16 10:49:12.129532] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:12.163 [2024-12-16 10:49:12.129539] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:12.163 [2024-12-16 10:49:12.129548] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:12.163 [2024-12-16 10:49:12.129555] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:12.163 [2024-12-16 10:49:12.129564] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:12.163 [2024-12-16 10:49:12.129572] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:12.163 [2024-12-16 10:49:12.129581] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:12.163 [2024-12-16 10:49:12.129588] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:12.163 [2024-12-16 10:49:12.129599] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:12.163 [2024-12-16 10:49:12.129607] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:12.163 [2024-12-16 10:49:12.129616] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:12.163 [2024-12-16 10:49:12.129624] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:12.163 [2024-12-16 10:49:12.129634] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:12.163 [2024-12-16 10:49:12.129642] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:12.163 [2024-12-16 10:49:12.129652] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:12.163 [2024-12-16 10:49:12.129659] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:12.163 [2024-12-16 10:49:12.129669] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:12.163 [2024-12-16 10:49:12.129677] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:12.163 [2024-12-16 10:49:12.129688] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:12.163 [2024-12-16 10:49:12.129696] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:12.163 [2024-12-16 10:49:12.129704] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:12.163 [2024-12-16 10:49:12.129711] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:12.163 [2024-12-16 10:49:12.129720] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:12.163 [2024-12-16 10:49:12.129727] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:12.163 
[2024-12-16 10:49:12.129736] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:12.163 [2024-12-16 10:49:12.129743] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:12.163 [2024-12-16 10:49:12.129757] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:12.163 [2024-12-16 10:49:12.129767] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:12.163 [2024-12-16 10:49:12.129776] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:12.163 [2024-12-16 10:49:12.129783] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:12.163 [2024-12-16 10:49:12.129796] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:12.163 [2024-12-16 10:49:12.129802] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:12.163 [2024-12-16 10:49:12.129812] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:12.163 [2024-12-16 10:49:12.129830] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:12.163 [2024-12-16 10:49:12.129839] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:12.163 [2024-12-16 10:49:12.129847] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:12.163 [2024-12-16 10:49:12.129862] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:12.163 [2024-12-16 10:49:12.129869] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:12.163 [2024-12-16 10:49:12.129881] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:12.163 [2024-12-16 10:49:12.129888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.163 [2024-12-16 10:49:12.129902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:12.163 [2024-12-16 10:49:12.129910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.739 ms 00:17:12.163 [2024-12-16 10:49:12.129918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.163 [2024-12-16 10:49:12.138258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.163 [2024-12-16 10:49:12.138385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:12.163 [2024-12-16 10:49:12.138399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.253 ms 00:17:12.163 [2024-12-16 10:49:12.138412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.163 [2024-12-16 10:49:12.138509] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.163 [2024-12-16 10:49:12.138521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:12.163 [2024-12-16 10:49:12.138531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:17:12.163 [2024-12-16 10:49:12.138540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.163 [2024-12-16 10:49:12.146502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.163 [2024-12-16 10:49:12.146533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:12.163 [2024-12-16 10:49:12.146542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.940 ms 00:17:12.163 [2024-12-16 10:49:12.146550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.163 [2024-12-16 10:49:12.146602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.163 [2024-12-16 10:49:12.146615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:12.163 [2024-12-16 10:49:12.146622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:17:12.163 [2024-12-16 10:49:12.146635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.163 [2024-12-16 10:49:12.146951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.163 [2024-12-16 10:49:12.146972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:12.163 [2024-12-16 10:49:12.146980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.296 ms 00:17:12.163 [2024-12-16 10:49:12.146989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.163 [2024-12-16 10:49:12.147112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.163 [2024-12-16 10:49:12.147127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:12.163 [2024-12-16 10:49:12.147137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:17:12.163 [2024-12-16 10:49:12.147146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.425 [2024-12-16 10:49:12.163261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.425 [2024-12-16 10:49:12.163317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:12.425 [2024-12-16 10:49:12.163336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.091 ms 00:17:12.425 [2024-12-16 10:49:12.163351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.425 [2024-12-16 10:49:12.166570] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:17:12.425 [2024-12-16 10:49:12.166619] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:12.425 [2024-12-16 10:49:12.166636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.425 [2024-12-16 10:49:12.166651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:12.425 [2024-12-16 10:49:12.166663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.109 ms 00:17:12.425 [2024-12-16 10:49:12.166677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.425 [2024-12-16 10:49:12.181721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.425 [2024-12-16 
10:49:12.181753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:12.425 [2024-12-16 10:49:12.181764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.987 ms 00:17:12.425 [2024-12-16 10:49:12.181775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.425 [2024-12-16 10:49:12.183766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.425 [2024-12-16 10:49:12.183798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:12.425 [2024-12-16 10:49:12.183806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.928 ms 00:17:12.425 [2024-12-16 10:49:12.183815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.425 [2024-12-16 10:49:12.185394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.425 [2024-12-16 10:49:12.185423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:12.425 [2024-12-16 10:49:12.185432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.546 ms 00:17:12.425 [2024-12-16 10:49:12.185440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.425 [2024-12-16 10:49:12.185755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.425 [2024-12-16 10:49:12.185771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:12.425 [2024-12-16 10:49:12.185781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.253 ms 00:17:12.425 [2024-12-16 10:49:12.185789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.425 [2024-12-16 10:49:12.200685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.425 [2024-12-16 10:49:12.200733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:12.425 [2024-12-16 10:49:12.200743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.875 ms 00:17:12.425 [2024-12-16 10:49:12.200760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.425 [2024-12-16 10:49:12.208194] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:12.425 [2024-12-16 10:49:12.221716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.425 [2024-12-16 10:49:12.221750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:12.425 [2024-12-16 10:49:12.221763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.889 ms 00:17:12.425 [2024-12-16 10:49:12.221770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.425 [2024-12-16 10:49:12.221858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.425 [2024-12-16 10:49:12.221874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:12.425 [2024-12-16 10:49:12.221884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:17:12.425 [2024-12-16 10:49:12.221894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.425 [2024-12-16 10:49:12.221963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.425 [2024-12-16 10:49:12.221977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:12.425 [2024-12-16 10:49:12.221991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:17:12.426 [2024-12-16 
10:49:12.221999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.426 [2024-12-16 10:49:12.222027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.426 [2024-12-16 10:49:12.222037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:12.426 [2024-12-16 10:49:12.222051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:17:12.426 [2024-12-16 10:49:12.222061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.426 [2024-12-16 10:49:12.222094] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:12.426 [2024-12-16 10:49:12.222102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.426 [2024-12-16 10:49:12.222111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:12.426 [2024-12-16 10:49:12.222118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:17:12.426 [2024-12-16 10:49:12.222127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.426 [2024-12-16 10:49:12.225772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.426 [2024-12-16 10:49:12.225806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:12.426 [2024-12-16 10:49:12.225815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.626 ms 00:17:12.426 [2024-12-16 10:49:12.225825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.426 [2024-12-16 10:49:12.225907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.426 [2024-12-16 10:49:12.225923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:12.426 [2024-12-16 10:49:12.225943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:17:12.426 [2024-12-16 10:49:12.225952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.426 [2024-12-16 10:49:12.226691] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:12.426 [2024-12-16 10:49:12.227649] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 110.865 ms, result 0 00:17:12.426 [2024-12-16 10:49:12.229595] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:12.426 Some configs were skipped because the RPC state that can call them passed over. 
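
With startup complete, trim.sh exercises the trim path through the bdev_ftl_unmap RPC at both extremes of the device's logical space: the two invocations traced below hit the first 1024 blocks (--lba 0) and the last 1024 blocks of the 23592960-entry L2P reported in the layout dump above (23592960 - 1024 = 23591936). A sketch of the same two calls, assuming a spdk_tgt already running and listening on the default /var/tmp/spdk.sock:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Trim the first 1024 logical blocks of ftl0.
    $RPC bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024
    # Trim the last 1024 blocks: 23592960 L2P entries - 1024 = 23591936.
    $RPC bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024

Each call shows up in the trace as a 'Process trim' action followed by its own "Management process finished, name 'FTL trim'" summary, as the two RPC invocations below show.
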
00:17:12.426 10:49:12 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:17:12.687 [2024-12-16 10:49:12.457205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.687 [2024-12-16 10:49:12.457342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:17:12.687 [2024-12-16 10:49:12.457398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.596 ms 00:17:12.688 [2024-12-16 10:49:12.457421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.688 [2024-12-16 10:49:12.457478] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.875 ms, result 0 00:17:12.688 true 00:17:12.688 10:49:12 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:17:12.688 [2024-12-16 10:49:12.668499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.688 [2024-12-16 10:49:12.668633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:17:12.688 [2024-12-16 10:49:12.668690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.167 ms 00:17:12.688 [2024-12-16 10:49:12.668715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.688 [2024-12-16 10:49:12.668765] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.435 ms, result 0 00:17:12.688 true 00:17:12.950 10:49:12 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 85550 00:17:12.950 10:49:12 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 85550 ']' 00:17:12.950 10:49:12 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 85550 00:17:12.950 10:49:12 ftl.ftl_trim -- common/autotest_common.sh@955 -- # uname 00:17:12.950 10:49:12 ftl.ftl_trim -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:12.950 10:49:12 ftl.ftl_trim -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85550 00:17:12.950 killing process with pid 85550 00:17:12.950 10:49:12 ftl.ftl_trim -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:12.950 10:49:12 ftl.ftl_trim -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:12.950 10:49:12 ftl.ftl_trim -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85550' 00:17:12.950 10:49:12 ftl.ftl_trim -- common/autotest_common.sh@969 -- # kill 85550 00:17:12.951 10:49:12 ftl.ftl_trim -- common/autotest_common.sh@974 -- # wait 85550 00:17:12.951 [2024-12-16 10:49:12.799310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.951 [2024-12-16 10:49:12.799361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:12.951 [2024-12-16 10:49:12.799376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:12.951 [2024-12-16 10:49:12.799384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.951 [2024-12-16 10:49:12.799410] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:12.951 [2024-12-16 10:49:12.799830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.951 [2024-12-16 10:49:12.799848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:12.951 [2024-12-16 10:49:12.799856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.407 ms 00:17:12.951 [2024-12-16 10:49:12.799865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.951 [2024-12-16 10:49:12.800160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.951 [2024-12-16 10:49:12.800173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:12.951 [2024-12-16 10:49:12.800182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.267 ms 00:17:12.951 [2024-12-16 10:49:12.800194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.951 [2024-12-16 10:49:12.804669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.951 [2024-12-16 10:49:12.804700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:12.951 [2024-12-16 10:49:12.804709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.455 ms 00:17:12.951 [2024-12-16 10:49:12.804719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.951 [2024-12-16 10:49:12.811687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.951 [2024-12-16 10:49:12.811719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:12.951 [2024-12-16 10:49:12.811727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.936 ms 00:17:12.951 [2024-12-16 10:49:12.811738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.951 [2024-12-16 10:49:12.814037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.951 [2024-12-16 10:49:12.814071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:12.951 [2024-12-16 10:49:12.814079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.231 ms 00:17:12.951 [2024-12-16 10:49:12.814088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.951 [2024-12-16 10:49:12.817319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.951 [2024-12-16 10:49:12.817353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:12.951 [2024-12-16 10:49:12.817362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.200 ms 00:17:12.951 [2024-12-16 10:49:12.817371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.951 [2024-12-16 10:49:12.817495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.951 [2024-12-16 10:49:12.817506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:12.951 [2024-12-16 10:49:12.817514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.090 ms 00:17:12.951 [2024-12-16 10:49:12.817523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.951 [2024-12-16 10:49:12.819200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.951 [2024-12-16 10:49:12.819335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:12.951 [2024-12-16 10:49:12.819349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.660 ms 00:17:12.951 [2024-12-16 10:49:12.819359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.951 [2024-12-16 10:49:12.820807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.951 [2024-12-16 10:49:12.820835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:12.951 [2024-12-16 
10:49:12.820844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.417 ms 00:17:12.951 [2024-12-16 10:49:12.820852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.951 [2024-12-16 10:49:12.822078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.951 [2024-12-16 10:49:12.822108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:12.951 [2024-12-16 10:49:12.822117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.195 ms 00:17:12.951 [2024-12-16 10:49:12.822128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.951 [2024-12-16 10:49:12.823227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.951 [2024-12-16 10:49:12.823259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:12.951 [2024-12-16 10:49:12.823268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.042 ms 00:17:12.951 [2024-12-16 10:49:12.823276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.951 [2024-12-16 10:49:12.823304] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:12.951 [2024-12-16 10:49:12.823320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:12.951 [2024-12-16 10:49:12.823329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:12.951 [2024-12-16 10:49:12.823341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:12.951 [2024-12-16 10:49:12.823348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:12.951 [2024-12-16 10:49:12.823357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:12.951 [2024-12-16 10:49:12.823364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:12.951 [2024-12-16 10:49:12.823373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:12.951 [2024-12-16 10:49:12.823380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:12.951 [2024-12-16 10:49:12.823391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:12.951 [2024-12-16 10:49:12.823398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:12.951 [2024-12-16 10:49:12.823407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:12.951 [2024-12-16 10:49:12.823414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:12.951 [2024-12-16 10:49:12.823422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:12.951 [2024-12-16 10:49:12.823429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:12.951 [2024-12-16 10:49:12.823438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:12.951 [2024-12-16 10:49:12.823445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:12.951 [2024-12-16 10:49:12.823453] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:12.951 [2024-12-16 10:49:12.823460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:12.951 [2024-12-16 10:49:12.823470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:12.951 [2024-12-16 10:49:12.823477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:12.951 [2024-12-16 10:49:12.823491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:12.951 [2024-12-16 10:49:12.823498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:12.951 [2024-12-16 10:49:12.823507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:12.951 [2024-12-16 10:49:12.823514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:12.951 [2024-12-16 10:49:12.823523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:12.951 [2024-12-16 10:49:12.823530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:12.951 [2024-12-16 10:49:12.823539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:12.951 [2024-12-16 10:49:12.823546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:12.951 [2024-12-16 10:49:12.823555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:12.951 [2024-12-16 10:49:12.823562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:12.951 [2024-12-16 10:49:12.823575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:12.951 [2024-12-16 10:49:12.823582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:12.951 [2024-12-16 10:49:12.823591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:12.951 [2024-12-16 10:49:12.823598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:12.951 [2024-12-16 10:49:12.823610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:12.951 [2024-12-16 10:49:12.823617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:12.951 [2024-12-16 10:49:12.823626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:12.951 [2024-12-16 10:49:12.823633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:12.951 [2024-12-16 10:49:12.823641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:12.951 [2024-12-16 10:49:12.823648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:12.951 [2024-12-16 10:49:12.823657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:12.951 [2024-12-16 
10:49:12.823664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:12.951 [2024-12-16 10:49:12.823673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:12.951 [2024-12-16 10:49:12.823680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:12.951 [2024-12-16 10:49:12.823688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:12.951 [2024-12-16 10:49:12.823695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:12.951 [2024-12-16 10:49:12.823703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:12.951 [2024-12-16 10:49:12.823711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:12.951 [2024-12-16 10:49:12.823719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:12.951 [2024-12-16 10:49:12.823726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:12.952 [2024-12-16 10:49:12.823736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:12.952 [2024-12-16 10:49:12.823743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:12.952 [2024-12-16 10:49:12.823752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:12.952 [2024-12-16 10:49:12.823759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:12.952 [2024-12-16 10:49:12.823767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:12.952 [2024-12-16 10:49:12.823775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:12.952 [2024-12-16 10:49:12.823783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:12.952 [2024-12-16 10:49:12.823790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:12.952 [2024-12-16 10:49:12.823799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:12.952 [2024-12-16 10:49:12.823806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:12.952 [2024-12-16 10:49:12.823816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:12.952 [2024-12-16 10:49:12.823823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:12.952 [2024-12-16 10:49:12.823833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:12.952 [2024-12-16 10:49:12.823842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:12.952 [2024-12-16 10:49:12.823851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:12.952 [2024-12-16 10:49:12.823858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 
00:17:12.952 [2024-12-16 10:49:12.823868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:12.952 [2024-12-16 10:49:12.823875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:12.952 [2024-12-16 10:49:12.823884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:12.952 [2024-12-16 10:49:12.823891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:12.952 [2024-12-16 10:49:12.823900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:12.952 [2024-12-16 10:49:12.823907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:12.952 [2024-12-16 10:49:12.823916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:12.952 [2024-12-16 10:49:12.823922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:12.952 [2024-12-16 10:49:12.823947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:12.952 [2024-12-16 10:49:12.823955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:12.952 [2024-12-16 10:49:12.823964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:12.952 [2024-12-16 10:49:12.823971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:12.952 [2024-12-16 10:49:12.823980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:12.952 [2024-12-16 10:49:12.823987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:12.952 [2024-12-16 10:49:12.823995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:12.952 [2024-12-16 10:49:12.824003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:12.952 [2024-12-16 10:49:12.824013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:12.952 [2024-12-16 10:49:12.824022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:12.952 [2024-12-16 10:49:12.824031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:12.952 [2024-12-16 10:49:12.824039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:12.952 [2024-12-16 10:49:12.824049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:12.952 [2024-12-16 10:49:12.824056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:12.952 [2024-12-16 10:49:12.824065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:12.952 [2024-12-16 10:49:12.824072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:12.952 [2024-12-16 10:49:12.824080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 
wr_cnt: 0 state: free 00:17:12.952 [2024-12-16 10:49:12.824088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:12.952 [2024-12-16 10:49:12.824097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:12.952 [2024-12-16 10:49:12.824104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:12.952 [2024-12-16 10:49:12.824114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:12.952 [2024-12-16 10:49:12.824121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:12.952 [2024-12-16 10:49:12.824130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:12.952 [2024-12-16 10:49:12.824138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:12.952 [2024-12-16 10:49:12.824148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:12.952 [2024-12-16 10:49:12.824155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:12.952 [2024-12-16 10:49:12.824171] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:12.952 [2024-12-16 10:49:12.824179] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 37bf0612-866e-4ad5-8f9f-97f4cdf0ff4f 00:17:12.952 [2024-12-16 10:49:12.824192] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:12.952 [2024-12-16 10:49:12.824198] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:12.952 [2024-12-16 10:49:12.824206] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:12.952 [2024-12-16 10:49:12.824215] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:12.952 [2024-12-16 10:49:12.824223] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:12.952 [2024-12-16 10:49:12.824230] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:12.952 [2024-12-16 10:49:12.824239] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:12.952 [2024-12-16 10:49:12.824245] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:12.952 [2024-12-16 10:49:12.824252] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:12.952 [2024-12-16 10:49:12.824259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.952 [2024-12-16 10:49:12.824272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:12.952 [2024-12-16 10:49:12.824279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.955 ms 00:17:12.952 [2024-12-16 10:49:12.824289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.952 [2024-12-16 10:49:12.825632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.952 [2024-12-16 10:49:12.825653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:12.952 [2024-12-16 10:49:12.825662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.315 ms 00:17:12.952 [2024-12-16 10:49:12.825670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.952 [2024-12-16 10:49:12.825749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:17:12.952 [2024-12-16 10:49:12.825759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:12.952 [2024-12-16 10:49:12.825767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:17:12.952 [2024-12-16 10:49:12.825775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.952 [2024-12-16 10:49:12.831001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:12.952 [2024-12-16 10:49:12.831109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:12.952 [2024-12-16 10:49:12.831157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:12.952 [2024-12-16 10:49:12.831181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.952 [2024-12-16 10:49:12.831256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:12.952 [2024-12-16 10:49:12.831281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:12.952 [2024-12-16 10:49:12.831301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:12.952 [2024-12-16 10:49:12.831322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.952 [2024-12-16 10:49:12.831417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:12.952 [2024-12-16 10:49:12.831445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:12.952 [2024-12-16 10:49:12.831467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:12.952 [2024-12-16 10:49:12.831528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.952 [2024-12-16 10:49:12.831563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:12.952 [2024-12-16 10:49:12.831675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:12.952 [2024-12-16 10:49:12.831709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:12.952 [2024-12-16 10:49:12.831730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.952 [2024-12-16 10:49:12.840366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:12.952 [2024-12-16 10:49:12.840491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:12.952 [2024-12-16 10:49:12.840541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:12.952 [2024-12-16 10:49:12.840565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.952 [2024-12-16 10:49:12.847165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:12.952 [2024-12-16 10:49:12.847273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:12.952 [2024-12-16 10:49:12.847318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:12.952 [2024-12-16 10:49:12.847344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.952 [2024-12-16 10:49:12.847410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:12.952 [2024-12-16 10:49:12.847442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:12.952 [2024-12-16 10:49:12.847462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:12.952 [2024-12-16 10:49:12.847485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:17:12.952 [2024-12-16 10:49:12.847529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:12.952 [2024-12-16 10:49:12.847552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:12.953 [2024-12-16 10:49:12.847571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:12.953 [2024-12-16 10:49:12.847630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.953 [2024-12-16 10:49:12.847718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:12.953 [2024-12-16 10:49:12.847786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:12.953 [2024-12-16 10:49:12.847808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:12.953 [2024-12-16 10:49:12.847861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.953 [2024-12-16 10:49:12.847915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:12.953 [2024-12-16 10:49:12.848079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:12.953 [2024-12-16 10:49:12.848136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:12.953 [2024-12-16 10:49:12.848164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.953 [2024-12-16 10:49:12.848213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:12.953 [2024-12-16 10:49:12.848659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:12.953 [2024-12-16 10:49:12.848707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:12.953 [2024-12-16 10:49:12.848732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.953 [2024-12-16 10:49:12.848848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:12.953 [2024-12-16 10:49:12.848880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:12.953 [2024-12-16 10:49:12.848900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:12.953 [2024-12-16 10:49:12.848920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.953 [2024-12-16 10:49:12.849085] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 49.747 ms, result 0 00:17:13.212 10:49:13 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:13.212 [2024-12-16 10:49:13.092727] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:17:13.212 [2024-12-16 10:49:13.093012] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85586 ] 00:17:13.470 [2024-12-16 10:49:13.229110] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:13.470 [2024-12-16 10:49:13.259669] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:13.470 [2024-12-16 10:49:13.343238] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:13.470 [2024-12-16 10:49:13.343299] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:13.731 [2024-12-16 10:49:13.495187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:13.731 [2024-12-16 10:49:13.495231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:13.731 [2024-12-16 10:49:13.495244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:13.731 [2024-12-16 10:49:13.495252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.731 [2024-12-16 10:49:13.497516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:13.731 [2024-12-16 10:49:13.497553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:13.731 [2024-12-16 10:49:13.497568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.248 ms 00:17:13.731 [2024-12-16 10:49:13.497578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.731 [2024-12-16 10:49:13.497640] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:13.731 [2024-12-16 10:49:13.497855] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:13.731 [2024-12-16 10:49:13.497869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:13.731 [2024-12-16 10:49:13.497880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:13.731 [2024-12-16 10:49:13.497891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.235 ms 00:17:13.731 [2024-12-16 10:49:13.497898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.731 [2024-12-16 10:49:13.498947] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:13.731 [2024-12-16 10:49:13.500915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:13.731 [2024-12-16 10:49:13.500955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:13.731 [2024-12-16 10:49:13.500968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.969 ms 00:17:13.731 [2024-12-16 10:49:13.500976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.731 [2024-12-16 10:49:13.501028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:13.731 [2024-12-16 10:49:13.501038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:13.731 [2024-12-16 10:49:13.501046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:17:13.731 [2024-12-16 10:49:13.501053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.731 [2024-12-16 10:49:13.505477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:17:13.731 [2024-12-16 10:49:13.505504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:13.731 [2024-12-16 10:49:13.505517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.387 ms 00:17:13.731 [2024-12-16 10:49:13.505524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.731 [2024-12-16 10:49:13.505624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:13.731 [2024-12-16 10:49:13.505636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:13.731 [2024-12-16 10:49:13.505644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:17:13.731 [2024-12-16 10:49:13.505655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.731 [2024-12-16 10:49:13.505678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:13.731 [2024-12-16 10:49:13.505686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:13.731 [2024-12-16 10:49:13.505697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:13.731 [2024-12-16 10:49:13.505704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.731 [2024-12-16 10:49:13.505722] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:13.731 [2024-12-16 10:49:13.506968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:13.731 [2024-12-16 10:49:13.506992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:13.731 [2024-12-16 10:49:13.507001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.249 ms 00:17:13.731 [2024-12-16 10:49:13.507008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.731 [2024-12-16 10:49:13.507054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:13.731 [2024-12-16 10:49:13.507066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:13.731 [2024-12-16 10:49:13.507074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:17:13.731 [2024-12-16 10:49:13.507082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.731 [2024-12-16 10:49:13.507098] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:13.731 [2024-12-16 10:49:13.507114] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:13.731 [2024-12-16 10:49:13.507156] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:13.731 [2024-12-16 10:49:13.507171] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:13.731 [2024-12-16 10:49:13.507274] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:13.731 [2024-12-16 10:49:13.507284] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:13.731 [2024-12-16 10:49:13.507294] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:13.731 [2024-12-16 10:49:13.507304] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:13.731 [2024-12-16 10:49:13.507315] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:13.731 [2024-12-16 10:49:13.507323] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:13.731 [2024-12-16 10:49:13.507330] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:13.731 [2024-12-16 10:49:13.507337] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:13.731 [2024-12-16 10:49:13.507347] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:13.731 [2024-12-16 10:49:13.507354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:13.731 [2024-12-16 10:49:13.507361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:13.731 [2024-12-16 10:49:13.507375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.257 ms 00:17:13.731 [2024-12-16 10:49:13.507382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.731 [2024-12-16 10:49:13.507471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:13.731 [2024-12-16 10:49:13.507479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:13.731 [2024-12-16 10:49:13.507486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:17:13.731 [2024-12-16 10:49:13.507493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.731 [2024-12-16 10:49:13.507589] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:13.731 [2024-12-16 10:49:13.507602] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:13.731 [2024-12-16 10:49:13.507616] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:13.731 [2024-12-16 10:49:13.507627] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:13.731 [2024-12-16 10:49:13.507635] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:13.731 [2024-12-16 10:49:13.507643] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:13.731 [2024-12-16 10:49:13.507651] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:13.731 [2024-12-16 10:49:13.507659] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:13.731 [2024-12-16 10:49:13.507667] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:13.731 [2024-12-16 10:49:13.507676] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:13.731 [2024-12-16 10:49:13.507683] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:13.731 [2024-12-16 10:49:13.507691] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:13.731 [2024-12-16 10:49:13.507698] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:13.731 [2024-12-16 10:49:13.507706] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:13.731 [2024-12-16 10:49:13.507714] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:13.731 [2024-12-16 10:49:13.507721] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:13.731 [2024-12-16 10:49:13.507729] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:13.731 [2024-12-16 10:49:13.507736] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:13.731 [2024-12-16 10:49:13.507743] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:13.731 [2024-12-16 10:49:13.507749] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:13.731 [2024-12-16 10:49:13.507756] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:13.731 [2024-12-16 10:49:13.507763] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:13.731 [2024-12-16 10:49:13.507769] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:13.731 [2024-12-16 10:49:13.507775] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:13.731 [2024-12-16 10:49:13.507782] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:13.731 [2024-12-16 10:49:13.507791] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:13.731 [2024-12-16 10:49:13.507798] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:13.731 [2024-12-16 10:49:13.507806] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:13.731 [2024-12-16 10:49:13.507813] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:13.731 [2024-12-16 10:49:13.507820] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:13.731 [2024-12-16 10:49:13.507826] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:13.731 [2024-12-16 10:49:13.507832] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:13.731 [2024-12-16 10:49:13.507839] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:13.731 [2024-12-16 10:49:13.507845] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:13.731 [2024-12-16 10:49:13.507851] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:13.731 [2024-12-16 10:49:13.507858] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:13.732 [2024-12-16 10:49:13.507864] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:13.732 [2024-12-16 10:49:13.507871] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:13.732 [2024-12-16 10:49:13.507878] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:13.732 [2024-12-16 10:49:13.507884] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:13.732 [2024-12-16 10:49:13.507891] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:13.732 [2024-12-16 10:49:13.507899] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:13.732 [2024-12-16 10:49:13.507906] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:13.732 [2024-12-16 10:49:13.507912] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:13.732 [2024-12-16 10:49:13.507919] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:13.732 [2024-12-16 10:49:13.507962] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:13.732 [2024-12-16 10:49:13.507971] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:13.732 [2024-12-16 10:49:13.507982] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:13.732 [2024-12-16 10:49:13.507988] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:13.732 [2024-12-16 10:49:13.507994] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:13.732 
[2024-12-16 10:49:13.508001] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:13.732 [2024-12-16 10:49:13.508007] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:13.732 [2024-12-16 10:49:13.508014] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:13.732 [2024-12-16 10:49:13.508023] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:13.732 [2024-12-16 10:49:13.508032] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:13.732 [2024-12-16 10:49:13.508040] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:13.732 [2024-12-16 10:49:13.508047] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:13.732 [2024-12-16 10:49:13.508056] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:13.732 [2024-12-16 10:49:13.508063] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:13.732 [2024-12-16 10:49:13.508071] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:13.732 [2024-12-16 10:49:13.508078] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:13.732 [2024-12-16 10:49:13.508085] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:13.732 [2024-12-16 10:49:13.508097] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:13.732 [2024-12-16 10:49:13.508104] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:13.732 [2024-12-16 10:49:13.508111] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:13.732 [2024-12-16 10:49:13.508118] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:13.732 [2024-12-16 10:49:13.508125] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:13.732 [2024-12-16 10:49:13.508132] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:13.732 [2024-12-16 10:49:13.508140] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:13.732 [2024-12-16 10:49:13.508147] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:13.732 [2024-12-16 10:49:13.508155] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:13.732 [2024-12-16 10:49:13.508162] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:17:13.732 [2024-12-16 10:49:13.508169] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:13.732 [2024-12-16 10:49:13.508178] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:13.732 [2024-12-16 10:49:13.508185] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:13.732 [2024-12-16 10:49:13.508192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:13.732 [2024-12-16 10:49:13.508199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:13.732 [2024-12-16 10:49:13.508208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.671 ms 00:17:13.732 [2024-12-16 10:49:13.508215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.732 [2024-12-16 10:49:13.525846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:13.732 [2024-12-16 10:49:13.525890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:13.732 [2024-12-16 10:49:13.525910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.581 ms 00:17:13.732 [2024-12-16 10:49:13.525924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.732 [2024-12-16 10:49:13.526100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:13.732 [2024-12-16 10:49:13.526114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:13.732 [2024-12-16 10:49:13.526125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:17:13.732 [2024-12-16 10:49:13.526137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.732 [2024-12-16 10:49:13.534166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:13.732 [2024-12-16 10:49:13.534290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:13.732 [2024-12-16 10:49:13.534305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.005 ms 00:17:13.732 [2024-12-16 10:49:13.534313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.732 [2024-12-16 10:49:13.534357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:13.732 [2024-12-16 10:49:13.534365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:13.732 [2024-12-16 10:49:13.534383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:13.732 [2024-12-16 10:49:13.534391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.732 [2024-12-16 10:49:13.534677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:13.732 [2024-12-16 10:49:13.534693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:13.732 [2024-12-16 10:49:13.534701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.269 ms 00:17:13.732 [2024-12-16 10:49:13.534711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.732 [2024-12-16 10:49:13.534831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:13.732 [2024-12-16 10:49:13.534842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:13.732 [2024-12-16 10:49:13.534850] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:17:13.732 [2024-12-16 10:49:13.534860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.732 [2024-12-16 10:49:13.539333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:13.732 [2024-12-16 10:49:13.539366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:13.732 [2024-12-16 10:49:13.539374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.454 ms 00:17:13.732 [2024-12-16 10:49:13.539382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.732 [2024-12-16 10:49:13.541461] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:17:13.732 [2024-12-16 10:49:13.541499] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:13.732 [2024-12-16 10:49:13.541513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:13.732 [2024-12-16 10:49:13.541520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:13.732 [2024-12-16 10:49:13.541529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.034 ms 00:17:13.732 [2024-12-16 10:49:13.541536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.732 [2024-12-16 10:49:13.555940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:13.732 [2024-12-16 10:49:13.555969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:13.732 [2024-12-16 10:49:13.555984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.361 ms 00:17:13.732 [2024-12-16 10:49:13.555991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.732 [2024-12-16 10:49:13.557554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:13.732 [2024-12-16 10:49:13.557662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:13.732 [2024-12-16 10:49:13.557676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.512 ms 00:17:13.732 [2024-12-16 10:49:13.557683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.732 [2024-12-16 10:49:13.558990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:13.732 [2024-12-16 10:49:13.559014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:13.732 [2024-12-16 10:49:13.559023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.275 ms 00:17:13.732 [2024-12-16 10:49:13.559029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.732 [2024-12-16 10:49:13.559337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:13.732 [2024-12-16 10:49:13.559355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:13.732 [2024-12-16 10:49:13.559364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.248 ms 00:17:13.732 [2024-12-16 10:49:13.559374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.732 [2024-12-16 10:49:13.573691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:13.732 [2024-12-16 10:49:13.573727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:13.732 [2024-12-16 10:49:13.573738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
14.297 ms 00:17:13.732 [2024-12-16 10:49:13.573745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.732 [2024-12-16 10:49:13.580990] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:13.732 [2024-12-16 10:49:13.594210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:13.732 [2024-12-16 10:49:13.594242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:13.732 [2024-12-16 10:49:13.594253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.413 ms 00:17:13.732 [2024-12-16 10:49:13.594261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.732 [2024-12-16 10:49:13.594345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:13.732 [2024-12-16 10:49:13.594355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:13.732 [2024-12-16 10:49:13.594364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:17:13.733 [2024-12-16 10:49:13.594377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.733 [2024-12-16 10:49:13.594425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:13.733 [2024-12-16 10:49:13.594434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:13.733 [2024-12-16 10:49:13.594442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:17:13.733 [2024-12-16 10:49:13.594449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.733 [2024-12-16 10:49:13.594468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:13.733 [2024-12-16 10:49:13.594476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:13.733 [2024-12-16 10:49:13.594484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:13.733 [2024-12-16 10:49:13.594495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.733 [2024-12-16 10:49:13.594524] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:13.733 [2024-12-16 10:49:13.594534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:13.733 [2024-12-16 10:49:13.594545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:13.733 [2024-12-16 10:49:13.594552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:17:13.733 [2024-12-16 10:49:13.594560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.733 [2024-12-16 10:49:13.597770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:13.733 [2024-12-16 10:49:13.597801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:13.733 [2024-12-16 10:49:13.597811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.192 ms 00:17:13.733 [2024-12-16 10:49:13.597819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.733 [2024-12-16 10:49:13.597890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:13.733 [2024-12-16 10:49:13.597902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:13.733 [2024-12-16 10:49:13.597911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:17:13.733 [2024-12-16 10:49:13.597918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.733 
[2024-12-16 10:49:13.598708] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:13.733 [2024-12-16 10:49:13.599674] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 103.272 ms, result 0 00:17:13.733 [2024-12-16 10:49:13.600249] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:13.733 [2024-12-16 10:49:13.610718] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:15.108  [2024-12-16T10:49:16.028Z] Copying: 44/256 [MB] (44 MBps) [2024-12-16T10:49:16.961Z] Copying: 89/256 [MB] (45 MBps) [2024-12-16T10:49:17.906Z] Copying: 132/256 [MB] (42 MBps) [2024-12-16T10:49:18.849Z] Copying: 160/256 [MB] (27 MBps) [2024-12-16T10:49:19.789Z] Copying: 183/256 [MB] (23 MBps) [2024-12-16T10:49:20.722Z] Copying: 204/256 [MB] (21 MBps) [2024-12-16T10:49:20.980Z] Copying: 246/256 [MB] (42 MBps) [2024-12-16T10:49:21.240Z] Copying: 256/256 [MB] (average 35 MBps)[2024-12-16 10:49:21.023763] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:21.251 [2024-12-16 10:49:21.025151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.251 [2024-12-16 10:49:21.025278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:21.251 [2024-12-16 10:49:21.025388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:21.251 [2024-12-16 10:49:21.025442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.251 [2024-12-16 10:49:21.025561] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:21.251 [2024-12-16 10:49:21.026178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.251 [2024-12-16 10:49:21.026337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:21.251 [2024-12-16 10:49:21.026462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.541 ms 00:17:21.251 [2024-12-16 10:49:21.026507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.251 [2024-12-16 10:49:21.027080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.251 [2024-12-16 10:49:21.027209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:21.251 [2024-12-16 10:49:21.027337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.474 ms 00:17:21.251 [2024-12-16 10:49:21.027381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.251 [2024-12-16 10:49:21.036292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.251 [2024-12-16 10:49:21.036467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:21.251 [2024-12-16 10:49:21.036493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.786 ms 00:17:21.251 [2024-12-16 10:49:21.036507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.251 [2024-12-16 10:49:21.043793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.251 [2024-12-16 10:49:21.043891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:21.251 [2024-12-16 10:49:21.043907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.223 ms 00:17:21.251 [2024-12-16 
10:49:21.043921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.251 [2024-12-16 10:49:21.045256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.251 [2024-12-16 10:49:21.045284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:21.251 [2024-12-16 10:49:21.045293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.280 ms 00:17:21.251 [2024-12-16 10:49:21.045300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.251 [2024-12-16 10:49:21.048364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.251 [2024-12-16 10:49:21.048398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:21.251 [2024-12-16 10:49:21.048412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.044 ms 00:17:21.251 [2024-12-16 10:49:21.048424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.251 [2024-12-16 10:49:21.048531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.251 [2024-12-16 10:49:21.048539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:21.251 [2024-12-16 10:49:21.048548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:17:21.251 [2024-12-16 10:49:21.048555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.251 [2024-12-16 10:49:21.050202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.251 [2024-12-16 10:49:21.050311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:21.251 [2024-12-16 10:49:21.050325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.630 ms 00:17:21.251 [2024-12-16 10:49:21.050332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.251 [2024-12-16 10:49:21.051301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.251 [2024-12-16 10:49:21.051330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:21.251 [2024-12-16 10:49:21.051339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.949 ms 00:17:21.251 [2024-12-16 10:49:21.051346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.251 [2024-12-16 10:49:21.052103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.251 [2024-12-16 10:49:21.052131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:21.251 [2024-12-16 10:49:21.052139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.738 ms 00:17:21.251 [2024-12-16 10:49:21.052146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.251 [2024-12-16 10:49:21.053213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.251 [2024-12-16 10:49:21.053242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:21.251 [2024-12-16 10:49:21.053251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.023 ms 00:17:21.251 [2024-12-16 10:49:21.053257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.251 [2024-12-16 10:49:21.053275] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:21.251 [2024-12-16 10:49:21.053292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:21.251 [2024-12-16 10:49:21.053302] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:21.251 [2024-12-16 10:49:21.053310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:21.251 [2024-12-16 10:49:21.053317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:21.251 [2024-12-16 10:49:21.053325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:21.251 [2024-12-16 10:49:21.053332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:21.251 [2024-12-16 10:49:21.053340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:21.251 [2024-12-16 10:49:21.053347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:21.251 [2024-12-16 10:49:21.053354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:21.251 [2024-12-16 10:49:21.053362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:21.251 [2024-12-16 10:49:21.053369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:21.252 [2024-12-16 10:49:21.053376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:21.252 [2024-12-16 10:49:21.053384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:21.252 [2024-12-16 10:49:21.053401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:21.252 [2024-12-16 10:49:21.053410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:21.252 [2024-12-16 10:49:21.053417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:21.252 [2024-12-16 10:49:21.053424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:21.252 [2024-12-16 10:49:21.053431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:21.252 [2024-12-16 10:49:21.053439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:21.252 [2024-12-16 10:49:21.053446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:21.252 [2024-12-16 10:49:21.053453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:21.252 [2024-12-16 10:49:21.053461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:21.252 [2024-12-16 10:49:21.053468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:21.252 [2024-12-16 10:49:21.053476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:21.252 [2024-12-16 10:49:21.053483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:21.252 [2024-12-16 10:49:21.053490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:21.252 [2024-12-16 
10:49:21.053497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:21.252 [2024-12-16 10:49:21.053505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:21.252 [2024-12-16 10:49:21.053512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:21.252 [2024-12-16 10:49:21.053520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:21.252 [2024-12-16 10:49:21.053527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:21.252 [2024-12-16 10:49:21.053534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:21.252 [2024-12-16 10:49:21.053542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:21.252 [2024-12-16 10:49:21.053550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:21.252 [2024-12-16 10:49:21.053557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:21.252 [2024-12-16 10:49:21.053571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:21.252 [2024-12-16 10:49:21.053579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:21.252 [2024-12-16 10:49:21.053587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:21.252 [2024-12-16 10:49:21.053594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:21.252 [2024-12-16 10:49:21.053602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:21.252 [2024-12-16 10:49:21.053609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:21.252 [2024-12-16 10:49:21.053616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:21.252 [2024-12-16 10:49:21.053624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:21.252 [2024-12-16 10:49:21.053631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:21.252 [2024-12-16 10:49:21.053639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:21.252 [2024-12-16 10:49:21.053646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:21.252 [2024-12-16 10:49:21.053653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:21.252 [2024-12-16 10:49:21.053660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:21.252 [2024-12-16 10:49:21.053668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:21.252 [2024-12-16 10:49:21.053675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:21.252 [2024-12-16 10:49:21.053683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 
00:17:21.252 [2024-12-16 10:49:21.053690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:21.252 [2024-12-16 10:49:21.053697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:21.252 [2024-12-16 10:49:21.053704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:21.252 [2024-12-16 10:49:21.053712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:21.252 [2024-12-16 10:49:21.053719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:21.252 [2024-12-16 10:49:21.053726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:21.252 [2024-12-16 10:49:21.053734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:21.252 [2024-12-16 10:49:21.053741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:21.252 [2024-12-16 10:49:21.053749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:21.252 [2024-12-16 10:49:21.053756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:21.252 [2024-12-16 10:49:21.053763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:21.252 [2024-12-16 10:49:21.053770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:21.252 [2024-12-16 10:49:21.053778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:21.252 [2024-12-16 10:49:21.053785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:21.252 [2024-12-16 10:49:21.053794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:21.252 [2024-12-16 10:49:21.053801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:21.252 [2024-12-16 10:49:21.053808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:21.252 [2024-12-16 10:49:21.053816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:21.252 [2024-12-16 10:49:21.053823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:21.252 [2024-12-16 10:49:21.053830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:21.252 [2024-12-16 10:49:21.053837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:21.252 [2024-12-16 10:49:21.053845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:21.252 [2024-12-16 10:49:21.053852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:21.252 [2024-12-16 10:49:21.053860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:21.252 [2024-12-16 10:49:21.053867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 
wr_cnt: 0 state: free 00:17:21.252 [2024-12-16 10:49:21.053875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:21.252 [2024-12-16 10:49:21.053882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:21.252 [2024-12-16 10:49:21.053889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:21.252 [2024-12-16 10:49:21.053896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:21.252 [2024-12-16 10:49:21.053904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:21.252 [2024-12-16 10:49:21.053911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:21.252 [2024-12-16 10:49:21.053920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:21.252 [2024-12-16 10:49:21.053937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:21.252 [2024-12-16 10:49:21.053944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:21.252 [2024-12-16 10:49:21.053952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:21.252 [2024-12-16 10:49:21.053959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:21.252 [2024-12-16 10:49:21.053966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:21.252 [2024-12-16 10:49:21.053974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:21.252 [2024-12-16 10:49:21.053981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:21.252 [2024-12-16 10:49:21.053988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:21.252 [2024-12-16 10:49:21.053996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:21.252 [2024-12-16 10:49:21.054003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:21.252 [2024-12-16 10:49:21.054010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:21.253 [2024-12-16 10:49:21.054017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:21.253 [2024-12-16 10:49:21.054024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:21.253 [2024-12-16 10:49:21.054033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:21.253 [2024-12-16 10:49:21.054041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:21.253 [2024-12-16 10:49:21.054048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:21.253 [2024-12-16 10:49:21.054056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:21.253 [2024-12-16 10:49:21.054071] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl0] 00:17:21.253 [2024-12-16 10:49:21.054084] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 37bf0612-866e-4ad5-8f9f-97f4cdf0ff4f 00:17:21.253 [2024-12-16 10:49:21.054092] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:21.253 [2024-12-16 10:49:21.054099] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:21.253 [2024-12-16 10:49:21.054106] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:21.253 [2024-12-16 10:49:21.054113] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:21.253 [2024-12-16 10:49:21.054120] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:21.253 [2024-12-16 10:49:21.054127] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:21.253 [2024-12-16 10:49:21.054134] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:21.253 [2024-12-16 10:49:21.054141] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:21.253 [2024-12-16 10:49:21.054147] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:21.253 [2024-12-16 10:49:21.054153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.253 [2024-12-16 10:49:21.054161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:21.253 [2024-12-16 10:49:21.054170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.879 ms 00:17:21.253 [2024-12-16 10:49:21.054177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.253 [2024-12-16 10:49:21.055509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.253 [2024-12-16 10:49:21.055530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:21.253 [2024-12-16 10:49:21.055539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.316 ms 00:17:21.253 [2024-12-16 10:49:21.055546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.253 [2024-12-16 10:49:21.055623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.253 [2024-12-16 10:49:21.055635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:21.253 [2024-12-16 10:49:21.055644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:17:21.253 [2024-12-16 10:49:21.055651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.253 [2024-12-16 10:49:21.060226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:21.253 [2024-12-16 10:49:21.060335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:21.253 [2024-12-16 10:49:21.060349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:21.253 [2024-12-16 10:49:21.060358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.253 [2024-12-16 10:49:21.060422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:21.253 [2024-12-16 10:49:21.060436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:21.253 [2024-12-16 10:49:21.060444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:21.253 [2024-12-16 10:49:21.060451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.253 [2024-12-16 10:49:21.060491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:21.253 
[2024-12-16 10:49:21.060499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:21.253 [2024-12-16 10:49:21.060508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:21.253 [2024-12-16 10:49:21.060518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.253 [2024-12-16 10:49:21.060535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:21.253 [2024-12-16 10:49:21.060546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:21.253 [2024-12-16 10:49:21.060556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:21.253 [2024-12-16 10:49:21.060563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.253 [2024-12-16 10:49:21.068857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:21.253 [2024-12-16 10:49:21.068893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:21.253 [2024-12-16 10:49:21.068905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:21.253 [2024-12-16 10:49:21.068914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.253 [2024-12-16 10:49:21.075585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:21.253 [2024-12-16 10:49:21.075733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:21.253 [2024-12-16 10:49:21.075748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:21.253 [2024-12-16 10:49:21.075761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.253 [2024-12-16 10:49:21.075802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:21.253 [2024-12-16 10:49:21.075811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:21.253 [2024-12-16 10:49:21.075819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:21.253 [2024-12-16 10:49:21.075826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.253 [2024-12-16 10:49:21.075855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:21.253 [2024-12-16 10:49:21.075862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:21.253 [2024-12-16 10:49:21.075873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:21.253 [2024-12-16 10:49:21.075883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.253 [2024-12-16 10:49:21.075966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:21.253 [2024-12-16 10:49:21.075976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:21.253 [2024-12-16 10:49:21.075984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:21.253 [2024-12-16 10:49:21.075991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.253 [2024-12-16 10:49:21.076019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:21.253 [2024-12-16 10:49:21.076031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:21.253 [2024-12-16 10:49:21.076039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:21.253 [2024-12-16 10:49:21.076047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.253 [2024-12-16 10:49:21.076092] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:21.253 [2024-12-16 10:49:21.076103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:21.253 [2024-12-16 10:49:21.076110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:21.253 [2024-12-16 10:49:21.076117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.253 [2024-12-16 10:49:21.076155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:21.253 [2024-12-16 10:49:21.076164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:21.253 [2024-12-16 10:49:21.076172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:21.253 [2024-12-16 10:49:21.076181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.253 [2024-12-16 10:49:21.076304] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 51.134 ms, result 0 00:17:21.253 00:17:21.253 00:17:21.512 10:49:21 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:17:22.080 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:17:22.080 10:49:21 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:17:22.080 10:49:21 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:17:22.080 10:49:21 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:17:22.080 10:49:21 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:22.080 10:49:21 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:17:22.080 10:49:21 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:17:22.080 Process with pid 85550 is not found 00:17:22.080 10:49:21 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 85550 00:17:22.080 10:49:21 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 85550 ']' 00:17:22.080 10:49:21 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 85550 00:17:22.080 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (85550) - No such process 00:17:22.080 10:49:21 ftl.ftl_trim -- common/autotest_common.sh@977 -- # echo 'Process with pid 85550 is not found' 00:17:22.080 ************************************ 00:17:22.080 END TEST ftl_trim 00:17:22.080 ************************************ 00:17:22.080 00:17:22.080 real 0m59.406s 00:17:22.080 user 1m19.138s 00:17:22.080 sys 0m4.858s 00:17:22.080 10:49:21 ftl.ftl_trim -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:22.080 10:49:21 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:17:22.080 10:49:21 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:17:22.080 10:49:21 ftl -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:17:22.080 10:49:21 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:22.080 10:49:21 ftl -- common/autotest_common.sh@10 -- # set +x 00:17:22.080 ************************************ 00:17:22.080 START TEST ftl_restore 00:17:22.080 ************************************ 00:17:22.080 10:49:21 ftl.ftl_restore -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:17:22.080 * Looking for test storage... 
00:17:22.080 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:17:22.080 10:49:21 ftl.ftl_restore -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:17:22.080 10:49:21 ftl.ftl_restore -- common/autotest_common.sh@1681 -- # lcov --version 00:17:22.080 10:49:21 ftl.ftl_restore -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:17:22.080 10:49:22 ftl.ftl_restore -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:17:22.080 10:49:22 ftl.ftl_restore -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:22.080 10:49:22 ftl.ftl_restore -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:22.080 10:49:22 ftl.ftl_restore -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:22.080 10:49:22 ftl.ftl_restore -- scripts/common.sh@336 -- # IFS=.-: 00:17:22.080 10:49:22 ftl.ftl_restore -- scripts/common.sh@336 -- # read -ra ver1 00:17:22.080 10:49:22 ftl.ftl_restore -- scripts/common.sh@337 -- # IFS=.-: 00:17:22.080 10:49:22 ftl.ftl_restore -- scripts/common.sh@337 -- # read -ra ver2 00:17:22.080 10:49:22 ftl.ftl_restore -- scripts/common.sh@338 -- # local 'op=<' 00:17:22.080 10:49:22 ftl.ftl_restore -- scripts/common.sh@340 -- # ver1_l=2 00:17:22.080 10:49:22 ftl.ftl_restore -- scripts/common.sh@341 -- # ver2_l=1 00:17:22.080 10:49:22 ftl.ftl_restore -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:22.080 10:49:22 ftl.ftl_restore -- scripts/common.sh@344 -- # case "$op" in 00:17:22.080 10:49:22 ftl.ftl_restore -- scripts/common.sh@345 -- # : 1 00:17:22.080 10:49:22 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:22.080 10:49:22 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:22.080 10:49:22 ftl.ftl_restore -- scripts/common.sh@365 -- # decimal 1 00:17:22.080 10:49:22 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=1 00:17:22.080 10:49:22 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:22.080 10:49:22 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 1 00:17:22.080 10:49:22 ftl.ftl_restore -- scripts/common.sh@365 -- # ver1[v]=1 00:17:22.080 10:49:22 ftl.ftl_restore -- scripts/common.sh@366 -- # decimal 2 00:17:22.080 10:49:22 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=2 00:17:22.080 10:49:22 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:22.080 10:49:22 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 2 00:17:22.080 10:49:22 ftl.ftl_restore -- scripts/common.sh@366 -- # ver2[v]=2 00:17:22.080 10:49:22 ftl.ftl_restore -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:22.080 10:49:22 ftl.ftl_restore -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:22.080 10:49:22 ftl.ftl_restore -- scripts/common.sh@368 -- # return 0 00:17:22.080 10:49:22 ftl.ftl_restore -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:22.080 10:49:22 ftl.ftl_restore -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:17:22.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:22.080 --rc genhtml_branch_coverage=1 00:17:22.080 --rc genhtml_function_coverage=1 00:17:22.080 --rc genhtml_legend=1 00:17:22.080 --rc geninfo_all_blocks=1 00:17:22.080 --rc geninfo_unexecuted_blocks=1 00:17:22.080 00:17:22.080 ' 00:17:22.080 10:49:22 ftl.ftl_restore -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:17:22.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:22.080 --rc genhtml_branch_coverage=1 00:17:22.080 --rc genhtml_function_coverage=1 
00:17:22.080 --rc genhtml_legend=1 00:17:22.080 --rc geninfo_all_blocks=1 00:17:22.080 --rc geninfo_unexecuted_blocks=1 00:17:22.080 00:17:22.080 ' 00:17:22.080 10:49:22 ftl.ftl_restore -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:17:22.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:22.080 --rc genhtml_branch_coverage=1 00:17:22.080 --rc genhtml_function_coverage=1 00:17:22.080 --rc genhtml_legend=1 00:17:22.080 --rc geninfo_all_blocks=1 00:17:22.080 --rc geninfo_unexecuted_blocks=1 00:17:22.080 00:17:22.080 ' 00:17:22.080 10:49:22 ftl.ftl_restore -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:17:22.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:22.080 --rc genhtml_branch_coverage=1 00:17:22.080 --rc genhtml_function_coverage=1 00:17:22.080 --rc genhtml_legend=1 00:17:22.080 --rc geninfo_all_blocks=1 00:17:22.080 --rc geninfo_unexecuted_blocks=1 00:17:22.080 00:17:22.080 ' 00:17:22.080 10:49:22 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:17:22.080 10:49:22 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:17:22.080 10:49:22 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:17:22.080 10:49:22 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:17:22.080 10:49:22 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:17:22.080 10:49:22 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:17:22.080 10:49:22 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:22.080 10:49:22 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:17:22.080 10:49:22 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:17:22.080 10:49:22 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:22.080 10:49:22 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:22.080 10:49:22 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:17:22.080 10:49:22 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:17:22.080 10:49:22 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:22.080 10:49:22 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:22.080 10:49:22 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:17:22.080 10:49:22 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:17:22.080 10:49:22 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:22.080 10:49:22 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:22.080 10:49:22 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:17:22.080 10:49:22 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:17:22.080 10:49:22 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:22.080 10:49:22 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:22.080 10:49:22 ftl.ftl_restore -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:22.080 10:49:22 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:22.080 10:49:22 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:17:22.080 10:49:22 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:17:22.080 10:49:22 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:22.080 10:49:22 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:22.080 10:49:22 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:22.080 10:49:22 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:17:22.080 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:22.080 10:49:22 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.NAgobhKbfR 00:17:22.080 10:49:22 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:17:22.080 10:49:22 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:17:22.080 10:49:22 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:17:22.080 10:49:22 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:17:22.080 10:49:22 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:17:22.080 10:49:22 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:17:22.081 10:49:22 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:17:22.081 10:49:22 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:17:22.081 10:49:22 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=85754 00:17:22.081 10:49:22 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 85754 00:17:22.081 10:49:22 ftl.ftl_restore -- common/autotest_common.sh@831 -- # '[' -z 85754 ']' 00:17:22.081 10:49:22 ftl.ftl_restore -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:22.081 10:49:22 ftl.ftl_restore -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:22.081 10:49:22 ftl.ftl_restore -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:22.081 10:49:22 ftl.ftl_restore -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:22.081 10:49:22 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:22.081 10:49:22 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:17:22.340 [2024-12-16 10:49:22.124876] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
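Note on the launch sequence recorded above: restore.sh takes -c 0000:00:10.0 as the NV-cache device and 0000:00:11.0 as the base device, starts spdk_tgt in the background (pid 85754 in this run), and blocks in waitforlisten until the target's RPC socket answers. A minimal stand-in for that wait, polling with a cheap always-registered RPC, is sketched below; it is a rough equivalent for illustration, not the actual body of the waitforlisten helper:

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt &
  svcpid=$!
  # Poll the default UNIX socket until the target starts servicing RPCs.
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.1
  done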
00:17:22.340 [2024-12-16 10:49:22.125007] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85754 ] 00:17:22.340 [2024-12-16 10:49:22.259039] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:22.340 [2024-12-16 10:49:22.290692] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:23.303 10:49:22 ftl.ftl_restore -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:23.303 10:49:22 ftl.ftl_restore -- common/autotest_common.sh@864 -- # return 0 00:17:23.303 10:49:22 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:17:23.303 10:49:22 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:17:23.303 10:49:22 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:17:23.303 10:49:22 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:17:23.303 10:49:22 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:17:23.303 10:49:22 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:17:23.303 10:49:23 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:17:23.303 10:49:23 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:17:23.303 10:49:23 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:17:23.303 10:49:23 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:17:23.303 10:49:23 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:17:23.303 10:49:23 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:17:23.303 10:49:23 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:17:23.303 10:49:23 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:17:23.562 10:49:23 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:17:23.562 { 00:17:23.562 "name": "nvme0n1", 00:17:23.562 "aliases": [ 00:17:23.562 "ebd2021a-2877-42c2-ae5b-5a07a18d664e" 00:17:23.562 ], 00:17:23.562 "product_name": "NVMe disk", 00:17:23.562 "block_size": 4096, 00:17:23.562 "num_blocks": 1310720, 00:17:23.562 "uuid": "ebd2021a-2877-42c2-ae5b-5a07a18d664e", 00:17:23.562 "numa_id": -1, 00:17:23.562 "assigned_rate_limits": { 00:17:23.562 "rw_ios_per_sec": 0, 00:17:23.562 "rw_mbytes_per_sec": 0, 00:17:23.562 "r_mbytes_per_sec": 0, 00:17:23.562 "w_mbytes_per_sec": 0 00:17:23.562 }, 00:17:23.562 "claimed": true, 00:17:23.562 "claim_type": "read_many_write_one", 00:17:23.562 "zoned": false, 00:17:23.562 "supported_io_types": { 00:17:23.562 "read": true, 00:17:23.562 "write": true, 00:17:23.562 "unmap": true, 00:17:23.562 "flush": true, 00:17:23.562 "reset": true, 00:17:23.562 "nvme_admin": true, 00:17:23.562 "nvme_io": true, 00:17:23.562 "nvme_io_md": false, 00:17:23.562 "write_zeroes": true, 00:17:23.562 "zcopy": false, 00:17:23.562 "get_zone_info": false, 00:17:23.562 "zone_management": false, 00:17:23.562 "zone_append": false, 00:17:23.562 "compare": true, 00:17:23.562 "compare_and_write": false, 00:17:23.562 "abort": true, 00:17:23.562 "seek_hole": false, 00:17:23.562 "seek_data": false, 00:17:23.562 "copy": true, 00:17:23.562 "nvme_iov_md": false 00:17:23.562 }, 00:17:23.562 "driver_specific": { 00:17:23.562 "nvme": [ 
00:17:23.562 { 00:17:23.562 "pci_address": "0000:00:11.0", 00:17:23.562 "trid": { 00:17:23.562 "trtype": "PCIe", 00:17:23.562 "traddr": "0000:00:11.0" 00:17:23.562 }, 00:17:23.562 "ctrlr_data": { 00:17:23.562 "cntlid": 0, 00:17:23.562 "vendor_id": "0x1b36", 00:17:23.562 "model_number": "QEMU NVMe Ctrl", 00:17:23.562 "serial_number": "12341", 00:17:23.562 "firmware_revision": "8.0.0", 00:17:23.562 "subnqn": "nqn.2019-08.org.qemu:12341", 00:17:23.562 "oacs": { 00:17:23.562 "security": 0, 00:17:23.562 "format": 1, 00:17:23.562 "firmware": 0, 00:17:23.562 "ns_manage": 1 00:17:23.562 }, 00:17:23.562 "multi_ctrlr": false, 00:17:23.562 "ana_reporting": false 00:17:23.562 }, 00:17:23.562 "vs": { 00:17:23.562 "nvme_version": "1.4" 00:17:23.562 }, 00:17:23.562 "ns_data": { 00:17:23.562 "id": 1, 00:17:23.562 "can_share": false 00:17:23.562 } 00:17:23.562 } 00:17:23.562 ], 00:17:23.562 "mp_policy": "active_passive" 00:17:23.562 } 00:17:23.562 } 00:17:23.562 ]' 00:17:23.562 10:49:23 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:17:23.562 10:49:23 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:17:23.562 10:49:23 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:17:23.562 10:49:23 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=1310720 00:17:23.562 10:49:23 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:17:23.562 10:49:23 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 5120 00:17:23.562 10:49:23 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:17:23.562 10:49:23 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:17:23.562 10:49:23 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:17:23.562 10:49:23 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:17:23.562 10:49:23 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:17:23.821 10:49:23 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=8fd616ef-888e-4e77-9e48-185778e9bce5 00:17:23.821 10:49:23 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:17:23.821 10:49:23 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 8fd616ef-888e-4e77-9e48-185778e9bce5 00:17:24.079 10:49:23 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:17:24.338 10:49:24 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=29deaba0-0089-4f69-85b6-c4b4929aa1dc 00:17:24.338 10:49:24 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 29deaba0-0089-4f69-85b6-c4b4929aa1dc 00:17:24.597 10:49:24 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=ff6beade-9423-4f8a-9a0d-1f4a4178e753 00:17:24.597 10:49:24 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:17:24.597 10:49:24 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 ff6beade-9423-4f8a-9a0d-1f4a4178e753 00:17:24.597 10:49:24 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:17:24.597 10:49:24 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:17:24.597 10:49:24 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=ff6beade-9423-4f8a-9a0d-1f4a4178e753 00:17:24.597 10:49:24 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:17:24.597 10:49:24 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size 
ff6beade-9423-4f8a-9a0d-1f4a4178e753 00:17:24.597 10:49:24 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=ff6beade-9423-4f8a-9a0d-1f4a4178e753 00:17:24.597 10:49:24 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:17:24.597 10:49:24 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:17:24.597 10:49:24 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:17:24.597 10:49:24 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ff6beade-9423-4f8a-9a0d-1f4a4178e753 00:17:24.597 10:49:24 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:17:24.597 { 00:17:24.597 "name": "ff6beade-9423-4f8a-9a0d-1f4a4178e753", 00:17:24.597 "aliases": [ 00:17:24.597 "lvs/nvme0n1p0" 00:17:24.597 ], 00:17:24.597 "product_name": "Logical Volume", 00:17:24.597 "block_size": 4096, 00:17:24.597 "num_blocks": 26476544, 00:17:24.597 "uuid": "ff6beade-9423-4f8a-9a0d-1f4a4178e753", 00:17:24.597 "assigned_rate_limits": { 00:17:24.597 "rw_ios_per_sec": 0, 00:17:24.597 "rw_mbytes_per_sec": 0, 00:17:24.597 "r_mbytes_per_sec": 0, 00:17:24.597 "w_mbytes_per_sec": 0 00:17:24.597 }, 00:17:24.597 "claimed": false, 00:17:24.597 "zoned": false, 00:17:24.597 "supported_io_types": { 00:17:24.597 "read": true, 00:17:24.597 "write": true, 00:17:24.597 "unmap": true, 00:17:24.597 "flush": false, 00:17:24.597 "reset": true, 00:17:24.597 "nvme_admin": false, 00:17:24.597 "nvme_io": false, 00:17:24.597 "nvme_io_md": false, 00:17:24.597 "write_zeroes": true, 00:17:24.597 "zcopy": false, 00:17:24.597 "get_zone_info": false, 00:17:24.597 "zone_management": false, 00:17:24.597 "zone_append": false, 00:17:24.597 "compare": false, 00:17:24.597 "compare_and_write": false, 00:17:24.597 "abort": false, 00:17:24.597 "seek_hole": true, 00:17:24.597 "seek_data": true, 00:17:24.597 "copy": false, 00:17:24.597 "nvme_iov_md": false 00:17:24.597 }, 00:17:24.597 "driver_specific": { 00:17:24.597 "lvol": { 00:17:24.597 "lvol_store_uuid": "29deaba0-0089-4f69-85b6-c4b4929aa1dc", 00:17:24.597 "base_bdev": "nvme0n1", 00:17:24.597 "thin_provision": true, 00:17:24.597 "num_allocated_clusters": 0, 00:17:24.597 "snapshot": false, 00:17:24.597 "clone": false, 00:17:24.597 "esnap_clone": false 00:17:24.597 } 00:17:24.597 } 00:17:24.597 } 00:17:24.597 ]' 00:17:24.597 10:49:24 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:17:24.597 10:49:24 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:17:24.597 10:49:24 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:17:24.855 10:49:24 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=26476544 00:17:24.856 10:49:24 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:17:24.856 10:49:24 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 103424 00:17:24.856 10:49:24 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:17:24.856 10:49:24 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:17:24.856 10:49:24 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:17:25.114 10:49:24 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:17:25.114 10:49:24 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:17:25.114 10:49:24 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size ff6beade-9423-4f8a-9a0d-1f4a4178e753 00:17:25.114 10:49:24 
ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=ff6beade-9423-4f8a-9a0d-1f4a4178e753 00:17:25.114 10:49:24 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:17:25.114 10:49:24 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:17:25.114 10:49:24 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:17:25.114 10:49:24 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ff6beade-9423-4f8a-9a0d-1f4a4178e753 00:17:25.114 10:49:25 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:17:25.114 { 00:17:25.114 "name": "ff6beade-9423-4f8a-9a0d-1f4a4178e753", 00:17:25.114 "aliases": [ 00:17:25.114 "lvs/nvme0n1p0" 00:17:25.114 ], 00:17:25.114 "product_name": "Logical Volume", 00:17:25.114 "block_size": 4096, 00:17:25.114 "num_blocks": 26476544, 00:17:25.114 "uuid": "ff6beade-9423-4f8a-9a0d-1f4a4178e753", 00:17:25.114 "assigned_rate_limits": { 00:17:25.114 "rw_ios_per_sec": 0, 00:17:25.114 "rw_mbytes_per_sec": 0, 00:17:25.114 "r_mbytes_per_sec": 0, 00:17:25.114 "w_mbytes_per_sec": 0 00:17:25.114 }, 00:17:25.114 "claimed": false, 00:17:25.114 "zoned": false, 00:17:25.114 "supported_io_types": { 00:17:25.114 "read": true, 00:17:25.114 "write": true, 00:17:25.114 "unmap": true, 00:17:25.114 "flush": false, 00:17:25.114 "reset": true, 00:17:25.114 "nvme_admin": false, 00:17:25.114 "nvme_io": false, 00:17:25.114 "nvme_io_md": false, 00:17:25.114 "write_zeroes": true, 00:17:25.114 "zcopy": false, 00:17:25.114 "get_zone_info": false, 00:17:25.114 "zone_management": false, 00:17:25.114 "zone_append": false, 00:17:25.114 "compare": false, 00:17:25.114 "compare_and_write": false, 00:17:25.114 "abort": false, 00:17:25.114 "seek_hole": true, 00:17:25.114 "seek_data": true, 00:17:25.114 "copy": false, 00:17:25.114 "nvme_iov_md": false 00:17:25.114 }, 00:17:25.114 "driver_specific": { 00:17:25.114 "lvol": { 00:17:25.114 "lvol_store_uuid": "29deaba0-0089-4f69-85b6-c4b4929aa1dc", 00:17:25.114 "base_bdev": "nvme0n1", 00:17:25.114 "thin_provision": true, 00:17:25.114 "num_allocated_clusters": 0, 00:17:25.114 "snapshot": false, 00:17:25.114 "clone": false, 00:17:25.114 "esnap_clone": false 00:17:25.114 } 00:17:25.114 } 00:17:25.114 } 00:17:25.114 ]' 00:17:25.114 10:49:25 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:17:25.114 10:49:25 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:17:25.114 10:49:25 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:17:25.371 10:49:25 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=26476544 00:17:25.371 10:49:25 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:17:25.371 10:49:25 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 103424 00:17:25.371 10:49:25 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:17:25.372 10:49:25 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:17:25.372 10:49:25 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:17:25.372 10:49:25 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size ff6beade-9423-4f8a-9a0d-1f4a4178e753 00:17:25.372 10:49:25 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=ff6beade-9423-4f8a-9a0d-1f4a4178e753 00:17:25.372 10:49:25 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:17:25.372 10:49:25 ftl.ftl_restore -- 
common/autotest_common.sh@1380 -- # local bs 00:17:25.372 10:49:25 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:17:25.372 10:49:25 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ff6beade-9423-4f8a-9a0d-1f4a4178e753 00:17:25.631 10:49:25 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:17:25.631 { 00:17:25.631 "name": "ff6beade-9423-4f8a-9a0d-1f4a4178e753", 00:17:25.631 "aliases": [ 00:17:25.631 "lvs/nvme0n1p0" 00:17:25.631 ], 00:17:25.631 "product_name": "Logical Volume", 00:17:25.631 "block_size": 4096, 00:17:25.631 "num_blocks": 26476544, 00:17:25.631 "uuid": "ff6beade-9423-4f8a-9a0d-1f4a4178e753", 00:17:25.631 "assigned_rate_limits": { 00:17:25.631 "rw_ios_per_sec": 0, 00:17:25.631 "rw_mbytes_per_sec": 0, 00:17:25.631 "r_mbytes_per_sec": 0, 00:17:25.631 "w_mbytes_per_sec": 0 00:17:25.631 }, 00:17:25.631 "claimed": false, 00:17:25.631 "zoned": false, 00:17:25.631 "supported_io_types": { 00:17:25.631 "read": true, 00:17:25.631 "write": true, 00:17:25.631 "unmap": true, 00:17:25.631 "flush": false, 00:17:25.631 "reset": true, 00:17:25.631 "nvme_admin": false, 00:17:25.631 "nvme_io": false, 00:17:25.631 "nvme_io_md": false, 00:17:25.631 "write_zeroes": true, 00:17:25.631 "zcopy": false, 00:17:25.631 "get_zone_info": false, 00:17:25.631 "zone_management": false, 00:17:25.631 "zone_append": false, 00:17:25.631 "compare": false, 00:17:25.631 "compare_and_write": false, 00:17:25.631 "abort": false, 00:17:25.631 "seek_hole": true, 00:17:25.631 "seek_data": true, 00:17:25.631 "copy": false, 00:17:25.631 "nvme_iov_md": false 00:17:25.631 }, 00:17:25.631 "driver_specific": { 00:17:25.631 "lvol": { 00:17:25.631 "lvol_store_uuid": "29deaba0-0089-4f69-85b6-c4b4929aa1dc", 00:17:25.631 "base_bdev": "nvme0n1", 00:17:25.631 "thin_provision": true, 00:17:25.631 "num_allocated_clusters": 0, 00:17:25.631 "snapshot": false, 00:17:25.631 "clone": false, 00:17:25.631 "esnap_clone": false 00:17:25.631 } 00:17:25.631 } 00:17:25.631 } 00:17:25.631 ]' 00:17:25.631 10:49:25 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:17:25.631 10:49:25 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:17:25.631 10:49:25 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:17:25.631 10:49:25 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=26476544 00:17:25.631 10:49:25 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:17:25.631 10:49:25 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 103424 00:17:25.631 10:49:25 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:17:25.631 10:49:25 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d ff6beade-9423-4f8a-9a0d-1f4a4178e753 --l2p_dram_limit 10' 00:17:25.631 10:49:25 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:17:25.631 10:49:25 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:17:25.631 10:49:25 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:17:25.631 10:49:25 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:17:25.631 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:17:25.631 10:49:25 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d ff6beade-9423-4f8a-9a0d-1f4a4178e753 --l2p_dram_limit 10 -c nvc0n1p0 00:17:25.893 
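The "[: : integer expression expected" failure recorded above (restore.sh line 54) is the shell itself complaining: the xtrace shows the test expanded to '[' '' -eq 1 ']', and POSIX test requires integer operands on both sides of -eq, so the empty string makes the command fail with status 2. That just evaluates as false, which is why the run continues to the bdev_ftl_create call at line 58. A minimal reproduction and a guard that avoids the message (the variable name here is a hypothetical stand-in for whatever restore.sh tests at that line):

  flag=""
  [ "$flag" -eq 1 ]                      # prints "[: : integer expression expected", exits 2
  [ -n "$flag" ] && [ "$flag" -eq 1 ]    # short-circuits on the empty string, no error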
[2024-12-16 10:49:25.763984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.893 [2024-12-16 10:49:25.764146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:25.893 [2024-12-16 10:49:25.764169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:25.893 [2024-12-16 10:49:25.764182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.893 [2024-12-16 10:49:25.764250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.893 [2024-12-16 10:49:25.764261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:25.893 [2024-12-16 10:49:25.764273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:17:25.893 [2024-12-16 10:49:25.764284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.893 [2024-12-16 10:49:25.764307] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:25.893 [2024-12-16 10:49:25.764588] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:25.893 [2024-12-16 10:49:25.764606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.893 [2024-12-16 10:49:25.764616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:25.893 [2024-12-16 10:49:25.764629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.306 ms 00:17:25.893 [2024-12-16 10:49:25.764641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.893 [2024-12-16 10:49:25.764703] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 642f8cb2-110e-4224-a571-58eb4e48d140 00:17:25.893 [2024-12-16 10:49:25.765806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.893 [2024-12-16 10:49:25.765839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:17:25.893 [2024-12-16 10:49:25.765855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:17:25.893 [2024-12-16 10:49:25.765863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.893 [2024-12-16 10:49:25.771124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.893 [2024-12-16 10:49:25.771233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:25.893 [2024-12-16 10:49:25.771251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.214 ms 00:17:25.893 [2024-12-16 10:49:25.771258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.893 [2024-12-16 10:49:25.771336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.893 [2024-12-16 10:49:25.771344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:25.893 [2024-12-16 10:49:25.771354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:17:25.893 [2024-12-16 10:49:25.771364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.893 [2024-12-16 10:49:25.771418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.893 [2024-12-16 10:49:25.771428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:25.893 [2024-12-16 10:49:25.771437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:17:25.893 [2024-12-16 10:49:25.771444] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.893 [2024-12-16 10:49:25.771469] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:25.893 [2024-12-16 10:49:25.772951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.894 [2024-12-16 10:49:25.772979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:25.894 [2024-12-16 10:49:25.772991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.490 ms 00:17:25.894 [2024-12-16 10:49:25.772999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.894 [2024-12-16 10:49:25.773034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.894 [2024-12-16 10:49:25.773047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:25.894 [2024-12-16 10:49:25.773055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:17:25.894 [2024-12-16 10:49:25.773065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.894 [2024-12-16 10:49:25.773081] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:17:25.894 [2024-12-16 10:49:25.773226] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:25.894 [2024-12-16 10:49:25.773237] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:25.894 [2024-12-16 10:49:25.773251] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:25.894 [2024-12-16 10:49:25.773261] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:25.894 [2024-12-16 10:49:25.773271] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:25.894 [2024-12-16 10:49:25.773278] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:17:25.894 [2024-12-16 10:49:25.773289] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:25.894 [2024-12-16 10:49:25.773296] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:25.894 [2024-12-16 10:49:25.773308] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:25.894 [2024-12-16 10:49:25.773318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.894 [2024-12-16 10:49:25.773326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:25.894 [2024-12-16 10:49:25.773334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.237 ms 00:17:25.894 [2024-12-16 10:49:25.773342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.894 [2024-12-16 10:49:25.773424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.894 [2024-12-16 10:49:25.773439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:25.894 [2024-12-16 10:49:25.773447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:17:25.894 [2024-12-16 10:49:25.773455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.894 [2024-12-16 10:49:25.773547] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:25.894 [2024-12-16 10:49:25.773559] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region sb 00:17:25.894 [2024-12-16 10:49:25.773567] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:25.894 [2024-12-16 10:49:25.773577] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:25.894 [2024-12-16 10:49:25.773584] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:25.894 [2024-12-16 10:49:25.773593] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:25.894 [2024-12-16 10:49:25.773600] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:17:25.894 [2024-12-16 10:49:25.773610] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:25.894 [2024-12-16 10:49:25.773618] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:17:25.894 [2024-12-16 10:49:25.773627] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:25.894 [2024-12-16 10:49:25.773634] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:25.894 [2024-12-16 10:49:25.773644] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:17:25.894 [2024-12-16 10:49:25.773652] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:25.894 [2024-12-16 10:49:25.773663] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:25.894 [2024-12-16 10:49:25.773670] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:17:25.894 [2024-12-16 10:49:25.773679] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:25.894 [2024-12-16 10:49:25.773686] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:25.894 [2024-12-16 10:49:25.773695] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:17:25.894 [2024-12-16 10:49:25.773703] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:25.894 [2024-12-16 10:49:25.773713] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:25.894 [2024-12-16 10:49:25.773720] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:17:25.894 [2024-12-16 10:49:25.773730] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:25.894 [2024-12-16 10:49:25.773737] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:25.894 [2024-12-16 10:49:25.773746] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:17:25.894 [2024-12-16 10:49:25.773753] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:25.894 [2024-12-16 10:49:25.773762] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:25.894 [2024-12-16 10:49:25.773771] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:17:25.894 [2024-12-16 10:49:25.773782] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:25.894 [2024-12-16 10:49:25.773790] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:25.894 [2024-12-16 10:49:25.773802] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:17:25.894 [2024-12-16 10:49:25.773809] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:25.894 [2024-12-16 10:49:25.773819] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:25.894 [2024-12-16 10:49:25.773826] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:17:25.894 [2024-12-16 10:49:25.773835] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:25.894 [2024-12-16 10:49:25.773844] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:25.894 [2024-12-16 10:49:25.773853] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:17:25.894 [2024-12-16 10:49:25.773860] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:25.894 [2024-12-16 10:49:25.773869] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:25.894 [2024-12-16 10:49:25.773877] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:17:25.894 [2024-12-16 10:49:25.773886] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:25.894 [2024-12-16 10:49:25.773893] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:25.894 [2024-12-16 10:49:25.773902] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:17:25.894 [2024-12-16 10:49:25.773909] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:25.894 [2024-12-16 10:49:25.773918] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:25.894 [2024-12-16 10:49:25.773942] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:25.894 [2024-12-16 10:49:25.773953] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:25.894 [2024-12-16 10:49:25.773961] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:25.894 [2024-12-16 10:49:25.773972] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:25.894 [2024-12-16 10:49:25.773979] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:25.894 [2024-12-16 10:49:25.773987] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:25.894 [2024-12-16 10:49:25.773994] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:25.894 [2024-12-16 10:49:25.774002] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:25.894 [2024-12-16 10:49:25.774009] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:25.894 [2024-12-16 10:49:25.774022] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:25.894 [2024-12-16 10:49:25.774031] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:25.894 [2024-12-16 10:49:25.774041] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:17:25.894 [2024-12-16 10:49:25.774049] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:17:25.894 [2024-12-16 10:49:25.774057] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:17:25.894 [2024-12-16 10:49:25.774065] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:17:25.894 [2024-12-16 10:49:25.774074] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:17:25.894 [2024-12-16 10:49:25.774081] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 
blk_offs:0x6120 blk_sz:0x800 00:17:25.894 [2024-12-16 10:49:25.774091] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:17:25.894 [2024-12-16 10:49:25.774098] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:17:25.894 [2024-12-16 10:49:25.774106] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:17:25.894 [2024-12-16 10:49:25.774113] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:17:25.894 [2024-12-16 10:49:25.774121] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:17:25.894 [2024-12-16 10:49:25.774129] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:17:25.894 [2024-12-16 10:49:25.774137] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:17:25.894 [2024-12-16 10:49:25.774144] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:17:25.894 [2024-12-16 10:49:25.774153] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:25.894 [2024-12-16 10:49:25.774163] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:25.894 [2024-12-16 10:49:25.774172] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:25.894 [2024-12-16 10:49:25.774179] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:25.894 [2024-12-16 10:49:25.774187] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:25.894 [2024-12-16 10:49:25.774194] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:25.895 [2024-12-16 10:49:25.774203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.895 [2024-12-16 10:49:25.774210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:25.895 [2024-12-16 10:49:25.774222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.721 ms 00:17:25.895 [2024-12-16 10:49:25.774229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.895 [2024-12-16 10:49:25.774270] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
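The layout dump is internally consistent: 20971520 L2P entries at an address size of 4 bytes come to exactly 80 MiB, matching the l2p region (offset 0.12 MiB, blocks 80.00 MiB), and if each of the 2048 P2L checkpoint pages is one 4096-byte block, each p2l region comes to the 8.00 MiB shown. A quick check of those figures, as a sketch with values copied from the log:

    l2p_entries=20971520    # "L2P entries"
    l2p_addr_size=4         # "L2P address size" (bytes)
    p2l_pages=2048          # "P2L checkpoint pages"
    block=4096              # lvol block_size (bytes), assumed to be the FTL block size here
    echo $(( l2p_entries * l2p_addr_size / 1024 / 1024 ))  # 80 -> "Region l2p ... blocks: 80.00 MiB"
    echo $(( p2l_pages * block / 1024 / 1024 ))            # 8  -> "Region p2l0 ... blocks: 8.00 MiB"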
00:17:25.895 [2024-12-16 10:49:25.774279] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:17:28.438 [2024-12-16 10:49:28.362786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.438 [2024-12-16 10:49:28.362849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:17:28.438 [2024-12-16 10:49:28.362870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2588.496 ms 00:17:28.438 [2024-12-16 10:49:28.362880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.438 [2024-12-16 10:49:28.371545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.438 [2024-12-16 10:49:28.371589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:28.438 [2024-12-16 10:49:28.371603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.562 ms 00:17:28.438 [2024-12-16 10:49:28.371611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.438 [2024-12-16 10:49:28.371717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.438 [2024-12-16 10:49:28.371726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:28.438 [2024-12-16 10:49:28.371742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:17:28.438 [2024-12-16 10:49:28.371750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.438 [2024-12-16 10:49:28.379984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.438 [2024-12-16 10:49:28.380017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:28.438 [2024-12-16 10:49:28.380029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.193 ms 00:17:28.438 [2024-12-16 10:49:28.380037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.438 [2024-12-16 10:49:28.380070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.438 [2024-12-16 10:49:28.380078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:28.438 [2024-12-16 10:49:28.380091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:28.438 [2024-12-16 10:49:28.380098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.438 [2024-12-16 10:49:28.380426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.438 [2024-12-16 10:49:28.380441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:28.438 [2024-12-16 10:49:28.380452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.296 ms 00:17:28.438 [2024-12-16 10:49:28.380459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.438 [2024-12-16 10:49:28.380570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.438 [2024-12-16 10:49:28.380578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:28.438 [2024-12-16 10:49:28.380588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.088 ms 00:17:28.438 [2024-12-16 10:49:28.380597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.438 [2024-12-16 10:49:28.397383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.438 [2024-12-16 10:49:28.397434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:28.438 [2024-12-16 
10:49:28.397453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.758 ms 00:17:28.438 [2024-12-16 10:49:28.397464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.438 [2024-12-16 10:49:28.408947] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:17:28.438 [2024-12-16 10:49:28.411815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.438 [2024-12-16 10:49:28.411977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:28.438 [2024-12-16 10:49:28.411995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.236 ms 00:17:28.438 [2024-12-16 10:49:28.412005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.835 [2024-12-16 10:49:28.470053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.835 [2024-12-16 10:49:28.470112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:17:28.835 [2024-12-16 10:49:28.470125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 58.019 ms 00:17:28.835 [2024-12-16 10:49:28.470138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.835 [2024-12-16 10:49:28.470331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.835 [2024-12-16 10:49:28.470344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:28.835 [2024-12-16 10:49:28.470353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.149 ms 00:17:28.835 [2024-12-16 10:49:28.470365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.835 [2024-12-16 10:49:28.474610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.835 [2024-12-16 10:49:28.474651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:17:28.835 [2024-12-16 10:49:28.474662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.226 ms 00:17:28.835 [2024-12-16 10:49:28.474671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.835 [2024-12-16 10:49:28.478621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.835 [2024-12-16 10:49:28.478757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:17:28.835 [2024-12-16 10:49:28.478773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.910 ms 00:17:28.835 [2024-12-16 10:49:28.478783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.835 [2024-12-16 10:49:28.479172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.835 [2024-12-16 10:49:28.479186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:28.835 [2024-12-16 10:49:28.479196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.314 ms 00:17:28.835 [2024-12-16 10:49:28.479207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.835 [2024-12-16 10:49:28.511265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.835 [2024-12-16 10:49:28.511311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:17:28.835 [2024-12-16 10:49:28.511323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.038 ms 00:17:28.835 [2024-12-16 10:49:28.511334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.835 [2024-12-16 10:49:28.516524] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.835 [2024-12-16 10:49:28.516566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:17:28.835 [2024-12-16 10:49:28.516577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.137 ms 00:17:28.836 [2024-12-16 10:49:28.516587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.836 [2024-12-16 10:49:28.520894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.836 [2024-12-16 10:49:28.521047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:17:28.836 [2024-12-16 10:49:28.521063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.270 ms 00:17:28.836 [2024-12-16 10:49:28.521072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.836 [2024-12-16 10:49:28.525968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.836 [2024-12-16 10:49:28.526005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:28.836 [2024-12-16 10:49:28.526015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.862 ms 00:17:28.836 [2024-12-16 10:49:28.526026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.836 [2024-12-16 10:49:28.526067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.836 [2024-12-16 10:49:28.526079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:28.836 [2024-12-16 10:49:28.526088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:28.836 [2024-12-16 10:49:28.526102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.836 [2024-12-16 10:49:28.526167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.836 [2024-12-16 10:49:28.526179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:28.836 [2024-12-16 10:49:28.526187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:17:28.836 [2024-12-16 10:49:28.526197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.836 [2024-12-16 10:49:28.527489] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2763.074 ms, result 0 00:17:28.836 { 00:17:28.836 "name": "ftl0", 00:17:28.836 "uuid": "642f8cb2-110e-4224-a571-58eb4e48d140" 00:17:28.836 } 00:17:28.836 10:49:28 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:17:28.836 10:49:28 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:17:28.836 10:49:28 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:17:29.107 10:49:28 ftl.ftl_restore -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:17:29.107 [2024-12-16 10:49:28.943371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.107 [2024-12-16 10:49:28.943417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:29.107 [2024-12-16 10:49:28.943431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:29.107 [2024-12-16 10:49:28.943439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.107 [2024-12-16 10:49:28.943463] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:29.107 
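The restore.sh@61 through @63 steps a few lines up assemble the JSON that spdk_dd replays later with --json: the save_subsystem_config output for the bdev subsystem gets wrapped in a {"subsystems": [...]} envelope. A sketch of that pattern; the redirect target is assumed from the ftl.json path used further down, since the trace does not show the redirection itself:

    {
        echo '{"subsystems": ['
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev
        echo ']}'
    } > /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
    # Unload the FTL device cleanly so its state can be restored from disk later:
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0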
[2024-12-16 10:49:28.943959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.107 [2024-12-16 10:49:28.943981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:29.107 [2024-12-16 10:49:28.943990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.476 ms 00:17:29.107 [2024-12-16 10:49:28.944006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.107 [2024-12-16 10:49:28.944260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.107 [2024-12-16 10:49:28.944273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:29.107 [2024-12-16 10:49:28.944281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.233 ms 00:17:29.107 [2024-12-16 10:49:28.944291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.107 [2024-12-16 10:49:28.947526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.107 [2024-12-16 10:49:28.947550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:29.107 [2024-12-16 10:49:28.947560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.219 ms 00:17:29.107 [2024-12-16 10:49:28.947569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.107 [2024-12-16 10:49:28.953800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.107 [2024-12-16 10:49:28.953828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:29.107 [2024-12-16 10:49:28.953838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.212 ms 00:17:29.107 [2024-12-16 10:49:28.953847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.107 [2024-12-16 10:49:28.956098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.107 [2024-12-16 10:49:28.956135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:29.107 [2024-12-16 10:49:28.956144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.179 ms 00:17:29.107 [2024-12-16 10:49:28.956153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.107 [2024-12-16 10:49:28.960808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.107 [2024-12-16 10:49:28.960850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:29.107 [2024-12-16 10:49:28.960860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.622 ms 00:17:29.107 [2024-12-16 10:49:28.960869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.107 [2024-12-16 10:49:28.961011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.107 [2024-12-16 10:49:28.961024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:29.107 [2024-12-16 10:49:28.961033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.107 ms 00:17:29.107 [2024-12-16 10:49:28.961042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.107 [2024-12-16 10:49:28.963750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.107 [2024-12-16 10:49:28.963785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:29.107 [2024-12-16 10:49:28.963794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.689 ms 00:17:29.107 [2024-12-16 10:49:28.963803] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.107 [2024-12-16 10:49:28.966108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.107 [2024-12-16 10:49:28.966143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:29.107 [2024-12-16 10:49:28.966152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.274 ms 00:17:29.107 [2024-12-16 10:49:28.966160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.107 [2024-12-16 10:49:28.968052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.107 [2024-12-16 10:49:28.968172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:29.107 [2024-12-16 10:49:28.968186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.860 ms 00:17:29.107 [2024-12-16 10:49:28.968195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.107 [2024-12-16 10:49:28.969755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.107 [2024-12-16 10:49:28.969791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:29.107 [2024-12-16 10:49:28.969806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.506 ms 00:17:29.107 [2024-12-16 10:49:28.969817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.107 [2024-12-16 10:49:28.969848] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:29.107 [2024-12-16 10:49:28.969864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:29.107 [2024-12-16 10:49:28.969875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:29.107 [2024-12-16 10:49:28.969884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:29.107 [2024-12-16 10:49:28.969892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:29.107 [2024-12-16 10:49:28.969904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:29.107 [2024-12-16 10:49:28.969911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:29.107 [2024-12-16 10:49:28.969920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:29.107 [2024-12-16 10:49:28.969940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:29.107 [2024-12-16 10:49:28.969950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:29.107 [2024-12-16 10:49:28.969958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:29.107 [2024-12-16 10:49:28.969968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:29.107 [2024-12-16 10:49:28.969976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:29.107 [2024-12-16 10:49:28.969985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:29.107 [2024-12-16 10:49:28.969993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:29.108 [2024-12-16 10:49:28.970002] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:29.108 [2024-12-16 10:49:28.970010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:29.108 [2024-12-16 10:49:28.970019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:29.108 [2024-12-16 10:49:28.970026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:29.108 [2024-12-16 10:49:28.970035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:29.108 [2024-12-16 10:49:28.970043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:29.108 [2024-12-16 10:49:28.970054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:29.108 [2024-12-16 10:49:28.970061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:29.108 [2024-12-16 10:49:28.970070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:29.108 [2024-12-16 10:49:28.970078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:29.108 [2024-12-16 10:49:28.970089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:29.108 [2024-12-16 10:49:28.970096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:29.108 [2024-12-16 10:49:28.970106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:29.108 [2024-12-16 10:49:28.970113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:29.108 [2024-12-16 10:49:28.970126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:29.108 [2024-12-16 10:49:28.970134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:29.108 [2024-12-16 10:49:28.970144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:29.108 [2024-12-16 10:49:28.970151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:29.108 [2024-12-16 10:49:28.970160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:29.108 [2024-12-16 10:49:28.970167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:29.108 [2024-12-16 10:49:28.970176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:29.108 [2024-12-16 10:49:28.970183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:29.108 [2024-12-16 10:49:28.970194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:29.108 [2024-12-16 10:49:28.970201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:29.108 [2024-12-16 10:49:28.970209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:29.108 
[2024-12-16 10:49:28.970217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:29.108 [2024-12-16 10:49:28.970226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:29.108 [2024-12-16 10:49:28.970233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:29.108 [2024-12-16 10:49:28.970242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:29.108 [2024-12-16 10:49:28.970249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:29.108 [2024-12-16 10:49:28.970258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:29.108 [2024-12-16 10:49:28.970265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:29.108 [2024-12-16 10:49:28.970274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:29.108 [2024-12-16 10:49:28.970282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:29.108 [2024-12-16 10:49:28.970291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:29.108 [2024-12-16 10:49:28.970298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:29.108 [2024-12-16 10:49:28.970307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:29.108 [2024-12-16 10:49:28.970314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:29.108 [2024-12-16 10:49:28.970325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:29.108 [2024-12-16 10:49:28.970333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:29.108 [2024-12-16 10:49:28.970342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:29.108 [2024-12-16 10:49:28.970349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:29.108 [2024-12-16 10:49:28.970358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:29.108 [2024-12-16 10:49:28.970365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:29.108 [2024-12-16 10:49:28.970374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:29.108 [2024-12-16 10:49:28.970382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:29.108 [2024-12-16 10:49:28.970393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:29.108 [2024-12-16 10:49:28.970400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:29.108 [2024-12-16 10:49:28.970408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:29.108 [2024-12-16 10:49:28.970416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 
state: free 00:17:29.108 [2024-12-16 10:49:28.970424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:29.108 [2024-12-16 10:49:28.970431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:29.108 [2024-12-16 10:49:28.970440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:29.108 [2024-12-16 10:49:28.970447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:29.108 [2024-12-16 10:49:28.970458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:29.108 [2024-12-16 10:49:28.970465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:29.108 [2024-12-16 10:49:28.970474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:29.108 [2024-12-16 10:49:28.970481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:29.108 [2024-12-16 10:49:28.970490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:29.108 [2024-12-16 10:49:28.970497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:29.108 [2024-12-16 10:49:28.970506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:29.108 [2024-12-16 10:49:28.970513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:29.108 [2024-12-16 10:49:28.970522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:29.108 [2024-12-16 10:49:28.970529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:29.108 [2024-12-16 10:49:28.970539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:29.108 [2024-12-16 10:49:28.970547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:29.108 [2024-12-16 10:49:28.970556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:29.108 [2024-12-16 10:49:28.970563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:29.108 [2024-12-16 10:49:28.970572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:29.108 [2024-12-16 10:49:28.970579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:29.108 [2024-12-16 10:49:28.970590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:29.108 [2024-12-16 10:49:28.970597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:29.108 [2024-12-16 10:49:28.970606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:29.108 [2024-12-16 10:49:28.970613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:29.108 [2024-12-16 10:49:28.970621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 
0 / 261120 wr_cnt: 0 state: free 00:17:29.108 [2024-12-16 10:49:28.970628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:29.108 [2024-12-16 10:49:28.970638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:29.108 [2024-12-16 10:49:28.970645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:29.108 [2024-12-16 10:49:28.970655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:29.108 [2024-12-16 10:49:28.970662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:29.108 [2024-12-16 10:49:28.970671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:29.108 [2024-12-16 10:49:28.970678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:29.108 [2024-12-16 10:49:28.970687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:29.108 [2024-12-16 10:49:28.970694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:29.108 [2024-12-16 10:49:28.970703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:29.108 [2024-12-16 10:49:28.970710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:29.108 [2024-12-16 10:49:28.970728] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:29.108 [2024-12-16 10:49:28.970736] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 642f8cb2-110e-4224-a571-58eb4e48d140 00:17:29.109 [2024-12-16 10:49:28.970745] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:29.109 [2024-12-16 10:49:28.970752] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:29.109 [2024-12-16 10:49:28.970763] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:29.109 [2024-12-16 10:49:28.970770] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:29.109 [2024-12-16 10:49:28.970779] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:29.109 [2024-12-16 10:49:28.970786] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:29.109 [2024-12-16 10:49:28.970795] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:29.109 [2024-12-16 10:49:28.970801] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:29.109 [2024-12-16 10:49:28.970808] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:29.109 [2024-12-16 10:49:28.970815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.109 [2024-12-16 10:49:28.970824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:29.109 [2024-12-16 10:49:28.970834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.968 ms 00:17:29.109 [2024-12-16 10:49:28.970843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.109 [2024-12-16 10:49:28.972552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.109 [2024-12-16 10:49:28.972604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:29.109 
[2024-12-16 10:49:28.972626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.691 ms 00:17:29.109 [2024-12-16 10:49:28.972647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.109 [2024-12-16 10:49:28.973148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.109 [2024-12-16 10:49:28.973340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:29.109 [2024-12-16 10:49:28.973453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:17:29.109 [2024-12-16 10:49:28.973507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.109 [2024-12-16 10:49:28.982269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:29.109 [2024-12-16 10:49:28.982489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:29.109 [2024-12-16 10:49:28.982716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:29.109 [2024-12-16 10:49:28.982784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.109 [2024-12-16 10:49:28.982923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:29.109 [2024-12-16 10:49:28.983154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:29.109 [2024-12-16 10:49:28.983204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:29.109 [2024-12-16 10:49:28.983247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.109 [2024-12-16 10:49:28.983415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:29.109 [2024-12-16 10:49:28.983478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:29.109 [2024-12-16 10:49:28.983638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:29.109 [2024-12-16 10:49:28.983689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.109 [2024-12-16 10:49:28.983752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:29.109 [2024-12-16 10:49:28.983812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:29.109 [2024-12-16 10:49:28.983854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:29.109 [2024-12-16 10:49:28.983899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.109 [2024-12-16 10:49:28.995668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:29.109 [2024-12-16 10:49:28.995808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:29.109 [2024-12-16 10:49:28.995859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:29.109 [2024-12-16 10:49:28.995885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.109 [2024-12-16 10:49:29.004701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:29.109 [2024-12-16 10:49:29.004841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:29.109 [2024-12-16 10:49:29.004891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:29.109 [2024-12-16 10:49:29.004919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.109 [2024-12-16 10:49:29.005092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:29.109 [2024-12-16 10:49:29.005123] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:29.109 [2024-12-16 10:49:29.005185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:29.109 [2024-12-16 10:49:29.005210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.109 [2024-12-16 10:49:29.005265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:29.109 [2024-12-16 10:49:29.005289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:29.109 [2024-12-16 10:49:29.005309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:29.109 [2024-12-16 10:49:29.005361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.109 [2024-12-16 10:49:29.005457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:29.109 [2024-12-16 10:49:29.005484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:29.109 [2024-12-16 10:49:29.005503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:29.109 [2024-12-16 10:49:29.005524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.109 [2024-12-16 10:49:29.005567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:29.109 [2024-12-16 10:49:29.005749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:29.109 [2024-12-16 10:49:29.005763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:29.109 [2024-12-16 10:49:29.005775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.109 [2024-12-16 10:49:29.005818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:29.109 [2024-12-16 10:49:29.005832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:29.109 [2024-12-16 10:49:29.005840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:29.109 [2024-12-16 10:49:29.005850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.109 [2024-12-16 10:49:29.005894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:29.109 [2024-12-16 10:49:29.005906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:29.109 [2024-12-16 10:49:29.005914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:29.109 [2024-12-16 10:49:29.005944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.109 [2024-12-16 10:49:29.006093] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 62.685 ms, result 0 00:17:29.109 true 00:17:29.109 10:49:29 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 85754 00:17:29.109 10:49:29 ftl.ftl_restore -- common/autotest_common.sh@950 -- # '[' -z 85754 ']' 00:17:29.109 10:49:29 ftl.ftl_restore -- common/autotest_common.sh@954 -- # kill -0 85754 00:17:29.109 10:49:29 ftl.ftl_restore -- common/autotest_common.sh@955 -- # uname 00:17:29.109 10:49:29 ftl.ftl_restore -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:29.109 10:49:29 ftl.ftl_restore -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85754 00:17:29.109 killing process with pid 85754 00:17:29.109 10:49:29 ftl.ftl_restore -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:29.109 10:49:29 ftl.ftl_restore -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:29.109 
10:49:29 ftl.ftl_restore -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85754' 00:17:29.109 10:49:29 ftl.ftl_restore -- common/autotest_common.sh@969 -- # kill 85754 00:17:29.109 10:49:29 ftl.ftl_restore -- common/autotest_common.sh@974 -- # wait 85754 00:17:34.387 10:49:33 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:17:38.593 262144+0 records in 00:17:38.593 262144+0 records out 00:17:38.593 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 3.88037 s, 277 MB/s 00:17:38.593 10:49:37 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:17:39.978 10:49:39 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:40.238 [2024-12-16 10:49:39.993842] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:17:40.238 [2024-12-16 10:49:39.994010] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85967 ] 00:17:40.238 [2024-12-16 10:49:40.132581] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:40.238 [2024-12-16 10:49:40.182876] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:40.501 [2024-12-16 10:49:40.298241] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:40.501 [2024-12-16 10:49:40.298321] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:40.501 [2024-12-16 10:49:40.459083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.501 [2024-12-16 10:49:40.459144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:40.501 [2024-12-16 10:49:40.459162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:40.501 [2024-12-16 10:49:40.459172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.501 [2024-12-16 10:49:40.459228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.501 [2024-12-16 10:49:40.459242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:40.501 [2024-12-16 10:49:40.459251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:17:40.501 [2024-12-16 10:49:40.459268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.501 [2024-12-16 10:49:40.459289] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:40.501 [2024-12-16 10:49:40.459566] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:40.501 [2024-12-16 10:49:40.459583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.501 [2024-12-16 10:49:40.459590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:40.501 [2024-12-16 10:49:40.459606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.298 ms 00:17:40.501 [2024-12-16 10:49:40.459615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.501 [2024-12-16 10:49:40.461301] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: 
[FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:40.501 [2024-12-16 10:49:40.465358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.501 [2024-12-16 10:49:40.465406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:40.501 [2024-12-16 10:49:40.465418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.059 ms 00:17:40.501 [2024-12-16 10:49:40.465427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.501 [2024-12-16 10:49:40.465505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.501 [2024-12-16 10:49:40.465518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:40.501 [2024-12-16 10:49:40.465529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:17:40.501 [2024-12-16 10:49:40.465537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.501 [2024-12-16 10:49:40.473491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.501 [2024-12-16 10:49:40.473541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:40.501 [2024-12-16 10:49:40.473556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.904 ms 00:17:40.501 [2024-12-16 10:49:40.473564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.501 [2024-12-16 10:49:40.473668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.501 [2024-12-16 10:49:40.473681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:40.501 [2024-12-16 10:49:40.473693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:17:40.501 [2024-12-16 10:49:40.473702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.501 [2024-12-16 10:49:40.473763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.501 [2024-12-16 10:49:40.473774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:40.501 [2024-12-16 10:49:40.473786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:17:40.501 [2024-12-16 10:49:40.473794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.501 [2024-12-16 10:49:40.473820] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:40.501 [2024-12-16 10:49:40.475870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.501 [2024-12-16 10:49:40.475910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:40.501 [2024-12-16 10:49:40.475920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.056 ms 00:17:40.501 [2024-12-16 10:49:40.475954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.501 [2024-12-16 10:49:40.475997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.501 [2024-12-16 10:49:40.476009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:40.501 [2024-12-16 10:49:40.476019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:17:40.501 [2024-12-16 10:49:40.476027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.501 [2024-12-16 10:49:40.476048] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:40.501 [2024-12-16 10:49:40.476074] upgrade/ftl_sb_v5.c: 
278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:40.501 [2024-12-16 10:49:40.476115] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:40.501 [2024-12-16 10:49:40.476151] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:40.501 [2024-12-16 10:49:40.476258] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:40.501 [2024-12-16 10:49:40.476268] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:40.501 [2024-12-16 10:49:40.476279] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:40.501 [2024-12-16 10:49:40.476290] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:40.501 [2024-12-16 10:49:40.476303] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:40.501 [2024-12-16 10:49:40.476315] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:17:40.501 [2024-12-16 10:49:40.476322] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:40.501 [2024-12-16 10:49:40.476333] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:40.501 [2024-12-16 10:49:40.476344] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:40.501 [2024-12-16 10:49:40.476352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.501 [2024-12-16 10:49:40.476360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:40.501 [2024-12-16 10:49:40.476367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.307 ms 00:17:40.501 [2024-12-16 10:49:40.476375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.501 [2024-12-16 10:49:40.476460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.501 [2024-12-16 10:49:40.476477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:40.501 [2024-12-16 10:49:40.476488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:17:40.501 [2024-12-16 10:49:40.476496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.501 [2024-12-16 10:49:40.476594] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:40.501 [2024-12-16 10:49:40.476606] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:40.501 [2024-12-16 10:49:40.476616] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:40.501 [2024-12-16 10:49:40.476625] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:40.501 [2024-12-16 10:49:40.476634] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:40.501 [2024-12-16 10:49:40.476642] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:40.501 [2024-12-16 10:49:40.476650] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:17:40.501 [2024-12-16 10:49:40.476658] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:40.501 [2024-12-16 10:49:40.476667] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:17:40.501 [2024-12-16 
10:49:40.476674] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:40.501 [2024-12-16 10:49:40.476686] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:40.501 [2024-12-16 10:49:40.476694] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:17:40.501 [2024-12-16 10:49:40.476708] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:40.501 [2024-12-16 10:49:40.476717] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:40.501 [2024-12-16 10:49:40.476725] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:17:40.501 [2024-12-16 10:49:40.476733] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:40.501 [2024-12-16 10:49:40.476741] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:40.502 [2024-12-16 10:49:40.476749] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:17:40.502 [2024-12-16 10:49:40.476758] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:40.502 [2024-12-16 10:49:40.476766] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:40.502 [2024-12-16 10:49:40.476785] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:17:40.502 [2024-12-16 10:49:40.476794] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:40.502 [2024-12-16 10:49:40.476801] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:40.502 [2024-12-16 10:49:40.476809] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:17:40.502 [2024-12-16 10:49:40.476817] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:40.502 [2024-12-16 10:49:40.476825] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:40.502 [2024-12-16 10:49:40.476833] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:17:40.502 [2024-12-16 10:49:40.476841] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:40.502 [2024-12-16 10:49:40.476853] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:40.502 [2024-12-16 10:49:40.476861] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:17:40.502 [2024-12-16 10:49:40.476869] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:40.502 [2024-12-16 10:49:40.476877] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:40.502 [2024-12-16 10:49:40.476885] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:17:40.502 [2024-12-16 10:49:40.476894] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:40.502 [2024-12-16 10:49:40.476901] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:40.502 [2024-12-16 10:49:40.476910] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:17:40.502 [2024-12-16 10:49:40.476918] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:40.502 [2024-12-16 10:49:40.476940] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:40.502 [2024-12-16 10:49:40.476949] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:17:40.502 [2024-12-16 10:49:40.476955] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:40.502 [2024-12-16 10:49:40.476962] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
trim_log_mirror 00:17:40.502 [2024-12-16 10:49:40.476969] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:17:40.502 [2024-12-16 10:49:40.476977] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:40.502 [2024-12-16 10:49:40.476984] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:40.502 [2024-12-16 10:49:40.476995] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:40.502 [2024-12-16 10:49:40.477002] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:40.502 [2024-12-16 10:49:40.477012] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:40.502 [2024-12-16 10:49:40.477020] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:40.502 [2024-12-16 10:49:40.477028] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:40.502 [2024-12-16 10:49:40.477035] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:40.502 [2024-12-16 10:49:40.477042] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:40.502 [2024-12-16 10:49:40.477048] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:40.502 [2024-12-16 10:49:40.477056] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:40.502 [2024-12-16 10:49:40.477064] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:40.502 [2024-12-16 10:49:40.477075] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:40.502 [2024-12-16 10:49:40.477084] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:17:40.502 [2024-12-16 10:49:40.477092] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:17:40.502 [2024-12-16 10:49:40.477107] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:17:40.502 [2024-12-16 10:49:40.477114] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:17:40.502 [2024-12-16 10:49:40.477121] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:17:40.502 [2024-12-16 10:49:40.477132] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:17:40.502 [2024-12-16 10:49:40.477138] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:17:40.502 [2024-12-16 10:49:40.477146] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:17:40.502 [2024-12-16 10:49:40.477153] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:17:40.502 [2024-12-16 10:49:40.477167] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:17:40.502 [2024-12-16 10:49:40.477175] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:17:40.502 [2024-12-16 10:49:40.477182] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:17:40.502 [2024-12-16 10:49:40.477189] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:17:40.502 [2024-12-16 10:49:40.477196] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:17:40.502 [2024-12-16 10:49:40.477204] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:40.502 [2024-12-16 10:49:40.477212] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:40.502 [2024-12-16 10:49:40.477222] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:40.502 [2024-12-16 10:49:40.477229] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:40.502 [2024-12-16 10:49:40.477236] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:40.502 [2024-12-16 10:49:40.477245] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:40.502 [2024-12-16 10:49:40.477252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.502 [2024-12-16 10:49:40.477262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:40.502 [2024-12-16 10:49:40.477271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.727 ms 00:17:40.502 [2024-12-16 10:49:40.477279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.764 [2024-12-16 10:49:40.498793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.764 [2024-12-16 10:49:40.498862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:40.764 [2024-12-16 10:49:40.498883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.465 ms 00:17:40.764 [2024-12-16 10:49:40.498895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.764 [2024-12-16 10:49:40.499045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.764 [2024-12-16 10:49:40.499061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:40.764 [2024-12-16 10:49:40.499073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:17:40.764 [2024-12-16 10:49:40.499083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.764 [2024-12-16 10:49:40.511221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.764 [2024-12-16 10:49:40.511268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:40.764 [2024-12-16 10:49:40.511285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.049 ms 00:17:40.764 [2024-12-16 10:49:40.511293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.764 [2024-12-16 10:49:40.511331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.764 [2024-12-16 
10:49:40.511341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:40.764 [2024-12-16 10:49:40.511349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:40.764 [2024-12-16 10:49:40.511357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.764 [2024-12-16 10:49:40.511876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.764 [2024-12-16 10:49:40.511914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:40.764 [2024-12-16 10:49:40.511965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.468 ms 00:17:40.764 [2024-12-16 10:49:40.511975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.764 [2024-12-16 10:49:40.512124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.764 [2024-12-16 10:49:40.512134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:40.764 [2024-12-16 10:49:40.512144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.127 ms 00:17:40.764 [2024-12-16 10:49:40.512153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.764 [2024-12-16 10:49:40.518895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.764 [2024-12-16 10:49:40.518964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:40.764 [2024-12-16 10:49:40.518977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.718 ms 00:17:40.764 [2024-12-16 10:49:40.518989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.764 [2024-12-16 10:49:40.522816] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:17:40.764 [2024-12-16 10:49:40.522873] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:40.764 [2024-12-16 10:49:40.522886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.764 [2024-12-16 10:49:40.522894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:40.764 [2024-12-16 10:49:40.522902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.805 ms 00:17:40.764 [2024-12-16 10:49:40.522909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.764 [2024-12-16 10:49:40.538377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.764 [2024-12-16 10:49:40.538428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:40.764 [2024-12-16 10:49:40.538441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.405 ms 00:17:40.764 [2024-12-16 10:49:40.538452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.764 [2024-12-16 10:49:40.541663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.764 [2024-12-16 10:49:40.541836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:40.764 [2024-12-16 10:49:40.541855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.157 ms 00:17:40.764 [2024-12-16 10:49:40.541863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.764 [2024-12-16 10:49:40.544307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.764 [2024-12-16 10:49:40.544352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 
name: Restore trim metadata 00:17:40.764 [2024-12-16 10:49:40.544361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.407 ms 00:17:40.764 [2024-12-16 10:49:40.544368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.764 [2024-12-16 10:49:40.544715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.764 [2024-12-16 10:49:40.544727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:40.765 [2024-12-16 10:49:40.544737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.272 ms 00:17:40.765 [2024-12-16 10:49:40.544749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.765 [2024-12-16 10:49:40.567890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.765 [2024-12-16 10:49:40.568138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:40.765 [2024-12-16 10:49:40.568205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.123 ms 00:17:40.765 [2024-12-16 10:49:40.568229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.765 [2024-12-16 10:49:40.576344] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:17:40.765 [2024-12-16 10:49:40.579515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.765 [2024-12-16 10:49:40.579651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:40.765 [2024-12-16 10:49:40.579704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.232 ms 00:17:40.765 [2024-12-16 10:49:40.579735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.765 [2024-12-16 10:49:40.579838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.765 [2024-12-16 10:49:40.579866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:40.765 [2024-12-16 10:49:40.579887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:17:40.765 [2024-12-16 10:49:40.579907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.765 [2024-12-16 10:49:40.580008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.765 [2024-12-16 10:49:40.580022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:40.765 [2024-12-16 10:49:40.580031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:17:40.765 [2024-12-16 10:49:40.580039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.765 [2024-12-16 10:49:40.580073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.765 [2024-12-16 10:49:40.580089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:40.765 [2024-12-16 10:49:40.580098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:17:40.765 [2024-12-16 10:49:40.580106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.765 [2024-12-16 10:49:40.580139] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:40.765 [2024-12-16 10:49:40.580152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.765 [2024-12-16 10:49:40.580161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:40.765 [2024-12-16 10:49:40.580169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.012 ms 00:17:40.765 [2024-12-16 10:49:40.580177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.765 [2024-12-16 10:49:40.585567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.765 [2024-12-16 10:49:40.585620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:40.765 [2024-12-16 10:49:40.585631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.368 ms 00:17:40.765 [2024-12-16 10:49:40.585639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.765 [2024-12-16 10:49:40.585726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.765 [2024-12-16 10:49:40.585737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:40.765 [2024-12-16 10:49:40.585746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:17:40.765 [2024-12-16 10:49:40.585754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.765 [2024-12-16 10:49:40.586870] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 127.341 ms, result 0 00:17:41.707  [2024-12-16T10:49:42.639Z] Copying: 16/1024 [MB] (16 MBps) [2024-12-16T10:49:43.620Z] Copying: 32/1024 [MB] (15 MBps) [2024-12-16T10:49:45.006Z] Copying: 52/1024 [MB] (19 MBps) [2024-12-16T10:49:45.947Z] Copying: 80/1024 [MB] (28 MBps) [2024-12-16T10:49:46.889Z] Copying: 101/1024 [MB] (21 MBps) [2024-12-16T10:49:47.831Z] Copying: 119/1024 [MB] (17 MBps) [2024-12-16T10:49:48.772Z] Copying: 135/1024 [MB] (16 MBps) [2024-12-16T10:49:49.714Z] Copying: 152/1024 [MB] (16 MBps) [2024-12-16T10:49:50.657Z] Copying: 170/1024 [MB] (18 MBps) [2024-12-16T10:49:51.600Z] Copying: 182/1024 [MB] (11 MBps) [2024-12-16T10:49:52.986Z] Copying: 202/1024 [MB] (19 MBps) [2024-12-16T10:49:53.930Z] Copying: 217/1024 [MB] (15 MBps) [2024-12-16T10:49:54.871Z] Copying: 231/1024 [MB] (14 MBps) [2024-12-16T10:49:55.814Z] Copying: 248/1024 [MB] (17 MBps) [2024-12-16T10:49:56.757Z] Copying: 273/1024 [MB] (25 MBps) [2024-12-16T10:49:57.703Z] Copying: 294/1024 [MB] (20 MBps) [2024-12-16T10:49:58.647Z] Copying: 313/1024 [MB] (19 MBps) [2024-12-16T10:50:00.035Z] Copying: 338/1024 [MB] (24 MBps) [2024-12-16T10:50:00.606Z] Copying: 353/1024 [MB] (15 MBps) [2024-12-16T10:50:01.993Z] Copying: 371904/1048576 [kB] (10064 kBps) [2024-12-16T10:50:02.937Z] Copying: 373/1024 [MB] (10 MBps) [2024-12-16T10:50:03.881Z] Copying: 383/1024 [MB] (10 MBps) [2024-12-16T10:50:04.826Z] Copying: 403296/1048576 [kB] (10200 kBps) [2024-12-16T10:50:05.773Z] Copying: 404/1024 [MB] (10 MBps) [2024-12-16T10:50:06.718Z] Copying: 414/1024 [MB] (10 MBps) [2024-12-16T10:50:07.662Z] Copying: 434416/1048576 [kB] (10196 kBps) [2024-12-16T10:50:08.607Z] Copying: 435/1024 [MB] (11 MBps) [2024-12-16T10:50:09.992Z] Copying: 446/1024 [MB] (11 MBps) [2024-12-16T10:50:10.934Z] Copying: 458/1024 [MB] (11 MBps) [2024-12-16T10:50:11.930Z] Copying: 469/1024 [MB] (11 MBps) [2024-12-16T10:50:12.895Z] Copying: 480/1024 [MB] (11 MBps) [2024-12-16T10:50:13.838Z] Copying: 491/1024 [MB] (11 MBps) [2024-12-16T10:50:14.779Z] Copying: 503/1024 [MB] (11 MBps) [2024-12-16T10:50:15.724Z] Copying: 515/1024 [MB] (12 MBps) [2024-12-16T10:50:16.670Z] Copying: 525/1024 [MB] (10 MBps) [2024-12-16T10:50:17.614Z] Copying: 536/1024 [MB] (11 MBps) [2024-12-16T10:50:18.996Z] Copying: 555/1024 [MB] (18 MBps) [2024-12-16T10:50:19.942Z] Copying: 587/1024 [MB] (32 MBps) [2024-12-16T10:50:20.887Z] 
Copying: 612/1024 [MB] (25 MBps) [2024-12-16T10:50:21.831Z] Copying: 634/1024 [MB] (22 MBps) [2024-12-16T10:50:22.775Z] Copying: 655/1024 [MB] (20 MBps) [2024-12-16T10:50:23.717Z] Copying: 673/1024 [MB] (18 MBps) [2024-12-16T10:50:24.661Z] Copying: 691/1024 [MB] (18 MBps) [2024-12-16T10:50:25.598Z] Copying: 715/1024 [MB] (24 MBps) [2024-12-16T10:50:26.982Z] Copying: 769/1024 [MB] (53 MBps) [2024-12-16T10:50:27.924Z] Copying: 790/1024 [MB] (21 MBps) [2024-12-16T10:50:28.867Z] Copying: 811/1024 [MB] (20 MBps) [2024-12-16T10:50:29.810Z] Copying: 832/1024 [MB] (20 MBps) [2024-12-16T10:50:30.754Z] Copying: 858/1024 [MB] (25 MBps) [2024-12-16T10:50:31.696Z] Copying: 876/1024 [MB] (18 MBps) [2024-12-16T10:50:32.691Z] Copying: 894/1024 [MB] (18 MBps) [2024-12-16T10:50:33.650Z] Copying: 914/1024 [MB] (19 MBps) [2024-12-16T10:50:35.035Z] Copying: 935/1024 [MB] (21 MBps) [2024-12-16T10:50:35.614Z] Copying: 955/1024 [MB] (19 MBps) [2024-12-16T10:50:37.002Z] Copying: 971/1024 [MB] (15 MBps) [2024-12-16T10:50:37.946Z] Copying: 985/1024 [MB] (13 MBps) [2024-12-16T10:50:38.890Z] Copying: 1002/1024 [MB] (17 MBps) [2024-12-16T10:50:39.836Z] Copying: 1013/1024 [MB] (10 MBps) [2024-12-16T10:50:39.836Z] Copying: 1023/1024 [MB] (10 MBps) [2024-12-16T10:50:39.836Z] Copying: 1024/1024 [MB] (average 17 MBps)[2024-12-16 10:50:39.633207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:39.847 [2024-12-16 10:50:39.633263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:39.847 [2024-12-16 10:50:39.633280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:39.847 [2024-12-16 10:50:39.633289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.847 [2024-12-16 10:50:39.633311] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:39.847 [2024-12-16 10:50:39.634099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:39.847 [2024-12-16 10:50:39.634131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:39.847 [2024-12-16 10:50:39.634144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.768 ms 00:18:39.847 [2024-12-16 10:50:39.634153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.847 [2024-12-16 10:50:39.636845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:39.847 [2024-12-16 10:50:39.637134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:39.847 [2024-12-16 10:50:39.637167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.665 ms 00:18:39.847 [2024-12-16 10:50:39.637176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.847 [2024-12-16 10:50:39.656149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:39.847 [2024-12-16 10:50:39.656324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:39.847 [2024-12-16 10:50:39.656353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.951 ms 00:18:39.847 [2024-12-16 10:50:39.656361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.847 [2024-12-16 10:50:39.662520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:39.847 [2024-12-16 10:50:39.662561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:39.847 [2024-12-16 10:50:39.662581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 6.122 ms 00:18:39.847 [2024-12-16 10:50:39.662596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.847 [2024-12-16 10:50:39.664908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:39.847 [2024-12-16 10:50:39.664971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:39.847 [2024-12-16 10:50:39.664982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.235 ms 00:18:39.847 [2024-12-16 10:50:39.664989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.847 [2024-12-16 10:50:39.669076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:39.847 [2024-12-16 10:50:39.669146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:39.847 [2024-12-16 10:50:39.669156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.045 ms 00:18:39.847 [2024-12-16 10:50:39.669165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.847 [2024-12-16 10:50:39.669287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:39.847 [2024-12-16 10:50:39.669298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:39.847 [2024-12-16 10:50:39.669306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:18:39.847 [2024-12-16 10:50:39.669315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.847 [2024-12-16 10:50:39.672348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:39.847 [2024-12-16 10:50:39.672394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:18:39.847 [2024-12-16 10:50:39.672404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.016 ms 00:18:39.847 [2024-12-16 10:50:39.672412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.847 [2024-12-16 10:50:39.675015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:39.847 [2024-12-16 10:50:39.675179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:18:39.847 [2024-12-16 10:50:39.675196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.564 ms 00:18:39.847 [2024-12-16 10:50:39.675203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.847 [2024-12-16 10:50:39.677311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:39.847 [2024-12-16 10:50:39.677357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:39.847 [2024-12-16 10:50:39.677366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.072 ms 00:18:39.847 [2024-12-16 10:50:39.677373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.847 [2024-12-16 10:50:39.679400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:39.847 [2024-12-16 10:50:39.679446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:39.847 [2024-12-16 10:50:39.679456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.960 ms 00:18:39.847 [2024-12-16 10:50:39.679462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.847 [2024-12-16 10:50:39.679500] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:39.847 [2024-12-16 10:50:39.679515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 
0 state: free 00:18:39.847 [2024-12-16 10:50:39.679532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:39.847 [2024-12-16 10:50:39.679540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:39.847 [2024-12-16 10:50:39.679547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:39.847 [2024-12-16 10:50:39.679556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:39.847 [2024-12-16 10:50:39.679563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:39.847 [2024-12-16 10:50:39.679571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:39.847 [2024-12-16 10:50:39.679579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:39.847 [2024-12-16 10:50:39.679587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:39.847 [2024-12-16 10:50:39.679595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:39.847 [2024-12-16 10:50:39.679602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:39.847 [2024-12-16 10:50:39.679609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:39.847 [2024-12-16 10:50:39.679616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:39.847 [2024-12-16 10:50:39.679623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:39.847 [2024-12-16 10:50:39.679631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:39.847 [2024-12-16 10:50:39.679638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:39.847 [2024-12-16 10:50:39.679645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:39.847 [2024-12-16 10:50:39.679652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:39.847 [2024-12-16 10:50:39.679660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:39.847 [2024-12-16 10:50:39.679667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:39.847 [2024-12-16 10:50:39.679675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:39.847 [2024-12-16 10:50:39.679682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:39.847 [2024-12-16 10:50:39.679689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:39.847 [2024-12-16 10:50:39.679696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:39.847 [2024-12-16 10:50:39.679704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:39.847 [2024-12-16 10:50:39.679711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 
261120 wr_cnt: 0 state: free 00:18:39.847 [2024-12-16 10:50:39.679719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:39.847 [2024-12-16 10:50:39.679727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:39.847 [2024-12-16 10:50:39.679735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:39.847 [2024-12-16 10:50:39.679742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:39.847 [2024-12-16 10:50:39.679750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:39.847 [2024-12-16 10:50:39.679759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:39.847 [2024-12-16 10:50:39.679767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:39.847 [2024-12-16 10:50:39.679774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:39.847 [2024-12-16 10:50:39.679782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:39.847 [2024-12-16 10:50:39.679790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:39.847 [2024-12-16 10:50:39.679797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:39.847 [2024-12-16 10:50:39.679805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:39.847 [2024-12-16 10:50:39.679813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:39.847 [2024-12-16 10:50:39.679820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:39.847 [2024-12-16 10:50:39.679827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:39.847 [2024-12-16 10:50:39.679834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:39.848 [2024-12-16 10:50:39.679842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:39.848 [2024-12-16 10:50:39.679850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:39.848 [2024-12-16 10:50:39.679858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:39.848 [2024-12-16 10:50:39.679865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:39.848 [2024-12-16 10:50:39.679872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:39.848 [2024-12-16 10:50:39.679879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:39.848 [2024-12-16 10:50:39.679887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:39.848 [2024-12-16 10:50:39.679903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:39.848 [2024-12-16 10:50:39.679910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:39.848 [2024-12-16 10:50:39.679919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:39.848 [2024-12-16 10:50:39.679950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:39.848 [2024-12-16 10:50:39.679959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:39.848 [2024-12-16 10:50:39.679967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:39.848 [2024-12-16 10:50:39.679975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:39.848 [2024-12-16 10:50:39.679983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:39.848 [2024-12-16 10:50:39.679991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:39.848 [2024-12-16 10:50:39.679999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:39.848 [2024-12-16 10:50:39.680006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:39.848 [2024-12-16 10:50:39.680014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:39.848 [2024-12-16 10:50:39.680022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:39.848 [2024-12-16 10:50:39.680029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:39.848 [2024-12-16 10:50:39.680038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:39.848 [2024-12-16 10:50:39.680046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:39.848 [2024-12-16 10:50:39.680053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:39.848 [2024-12-16 10:50:39.680061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:39.848 [2024-12-16 10:50:39.680069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:39.848 [2024-12-16 10:50:39.680076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:39.848 [2024-12-16 10:50:39.680084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:39.848 [2024-12-16 10:50:39.680091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:39.848 [2024-12-16 10:50:39.680098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:39.848 [2024-12-16 10:50:39.680106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:39.848 [2024-12-16 10:50:39.680114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:39.848 [2024-12-16 10:50:39.680121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:39.848 [2024-12-16 10:50:39.680128] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:39.848 [2024-12-16 10:50:39.680136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:39.848 [2024-12-16 10:50:39.680144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:39.848 [2024-12-16 10:50:39.680151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:39.848 [2024-12-16 10:50:39.680159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:39.848 [2024-12-16 10:50:39.680168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:39.848 [2024-12-16 10:50:39.680176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:39.848 [2024-12-16 10:50:39.680183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:39.848 [2024-12-16 10:50:39.680206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:39.848 [2024-12-16 10:50:39.680215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:39.848 [2024-12-16 10:50:39.680222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:39.848 [2024-12-16 10:50:39.680229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:39.848 [2024-12-16 10:50:39.680236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:39.848 [2024-12-16 10:50:39.680244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:39.848 [2024-12-16 10:50:39.680251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:39.848 [2024-12-16 10:50:39.680259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:39.848 [2024-12-16 10:50:39.680266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:39.848 [2024-12-16 10:50:39.680273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:39.848 [2024-12-16 10:50:39.680280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:39.848 [2024-12-16 10:50:39.680288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:39.848 [2024-12-16 10:50:39.680296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:39.848 [2024-12-16 10:50:39.680304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:39.848 [2024-12-16 10:50:39.680311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:39.848 [2024-12-16 10:50:39.680319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:39.848 [2024-12-16 10:50:39.680326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:39.848 [2024-12-16 
10:50:39.680343] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:39.848 [2024-12-16 10:50:39.680352] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 642f8cb2-110e-4224-a571-58eb4e48d140 00:18:39.848 [2024-12-16 10:50:39.680360] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:39.848 [2024-12-16 10:50:39.680368] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:39.848 [2024-12-16 10:50:39.680375] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:39.848 [2024-12-16 10:50:39.680388] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:39.848 [2024-12-16 10:50:39.680395] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:39.848 [2024-12-16 10:50:39.680404] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:39.848 [2024-12-16 10:50:39.680411] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:39.848 [2024-12-16 10:50:39.680418] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:39.848 [2024-12-16 10:50:39.680424] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:39.848 [2024-12-16 10:50:39.680431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:39.848 [2024-12-16 10:50:39.680443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:39.848 [2024-12-16 10:50:39.680452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.933 ms 00:18:39.848 [2024-12-16 10:50:39.680468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.848 [2024-12-16 10:50:39.682802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:39.848 [2024-12-16 10:50:39.683000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:39.848 [2024-12-16 10:50:39.683015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.316 ms 00:18:39.848 [2024-12-16 10:50:39.683024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.848 [2024-12-16 10:50:39.683143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:39.848 [2024-12-16 10:50:39.683153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:39.848 [2024-12-16 10:50:39.683168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:18:39.848 [2024-12-16 10:50:39.683181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.848 [2024-12-16 10:50:39.689796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:39.848 [2024-12-16 10:50:39.689847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:39.848 [2024-12-16 10:50:39.689858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:39.848 [2024-12-16 10:50:39.689866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.848 [2024-12-16 10:50:39.689966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:39.848 [2024-12-16 10:50:39.689975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:39.848 [2024-12-16 10:50:39.689989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:39.848 [2024-12-16 10:50:39.690002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.848 [2024-12-16 10:50:39.690066] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:39.848 [2024-12-16 10:50:39.690076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:39.848 [2024-12-16 10:50:39.690085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:39.848 [2024-12-16 10:50:39.690092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.848 [2024-12-16 10:50:39.690109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:39.849 [2024-12-16 10:50:39.690117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:39.849 [2024-12-16 10:50:39.690125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:39.849 [2024-12-16 10:50:39.690136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.849 [2024-12-16 10:50:39.703328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:39.849 [2024-12-16 10:50:39.703375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:39.849 [2024-12-16 10:50:39.703386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:39.849 [2024-12-16 10:50:39.703395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.849 [2024-12-16 10:50:39.713766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:39.849 [2024-12-16 10:50:39.713999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:39.849 [2024-12-16 10:50:39.714019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:39.849 [2024-12-16 10:50:39.714036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.849 [2024-12-16 10:50:39.714084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:39.849 [2024-12-16 10:50:39.714094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:39.849 [2024-12-16 10:50:39.714103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:39.849 [2024-12-16 10:50:39.714116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.849 [2024-12-16 10:50:39.714153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:39.849 [2024-12-16 10:50:39.714167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:39.849 [2024-12-16 10:50:39.714175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:39.849 [2024-12-16 10:50:39.714183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.849 [2024-12-16 10:50:39.714268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:39.849 [2024-12-16 10:50:39.714279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:39.849 [2024-12-16 10:50:39.714288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:39.849 [2024-12-16 10:50:39.714295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.849 [2024-12-16 10:50:39.714325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:39.849 [2024-12-16 10:50:39.714335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:39.849 [2024-12-16 10:50:39.714344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:39.849 [2024-12-16 10:50:39.714352] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:18:39.849 [2024-12-16 10:50:39.714397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:39.849 [2024-12-16 10:50:39.714407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:39.849 [2024-12-16 10:50:39.714416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:39.849 [2024-12-16 10:50:39.714424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.849 [2024-12-16 10:50:39.714470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:39.849 [2024-12-16 10:50:39.714482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:39.849 [2024-12-16 10:50:39.714492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:39.849 [2024-12-16 10:50:39.714501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.849 [2024-12-16 10:50:39.714636] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 81.397 ms, result 0 00:18:40.110 00:18:40.110 00:18:40.110 10:50:40 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:18:40.371 [2024-12-16 10:50:40.136677] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:18:40.371 [2024-12-16 10:50:40.136838] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86588 ] 00:18:40.371 [2024-12-16 10:50:40.273359] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:40.371 [2024-12-16 10:50:40.324953] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:18:40.634 [2024-12-16 10:50:40.439120] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:40.634 [2024-12-16 10:50:40.439449] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:40.634 [2024-12-16 10:50:40.599267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.634 [2024-12-16 10:50:40.599325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:40.634 [2024-12-16 10:50:40.599343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:40.634 [2024-12-16 10:50:40.599352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.634 [2024-12-16 10:50:40.599416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.634 [2024-12-16 10:50:40.599426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:40.634 [2024-12-16 10:50:40.599436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:18:40.634 [2024-12-16 10:50:40.599449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.634 [2024-12-16 10:50:40.599469] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:40.634 [2024-12-16 10:50:40.599735] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:40.634 [2024-12-16 10:50:40.599752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:18:40.634 [2024-12-16 10:50:40.599761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:40.634 [2024-12-16 10:50:40.599772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.287 ms 00:18:40.634 [2024-12-16 10:50:40.599781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.634 [2024-12-16 10:50:40.601453] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:40.634 [2024-12-16 10:50:40.605219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.634 [2024-12-16 10:50:40.605269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:40.634 [2024-12-16 10:50:40.605280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.769 ms 00:18:40.634 [2024-12-16 10:50:40.605288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.634 [2024-12-16 10:50:40.605368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.634 [2024-12-16 10:50:40.605381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:40.634 [2024-12-16 10:50:40.605392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:18:40.634 [2024-12-16 10:50:40.605400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.634 [2024-12-16 10:50:40.613336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.634 [2024-12-16 10:50:40.613379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:40.634 [2024-12-16 10:50:40.613389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.887 ms 00:18:40.634 [2024-12-16 10:50:40.613397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.634 [2024-12-16 10:50:40.613505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.634 [2024-12-16 10:50:40.613518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:40.634 [2024-12-16 10:50:40.613528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:18:40.634 [2024-12-16 10:50:40.613536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.634 [2024-12-16 10:50:40.613596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.634 [2024-12-16 10:50:40.613605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:40.634 [2024-12-16 10:50:40.613614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:18:40.634 [2024-12-16 10:50:40.613621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.634 [2024-12-16 10:50:40.613642] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:40.634 [2024-12-16 10:50:40.615779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.634 [2024-12-16 10:50:40.615819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:40.634 [2024-12-16 10:50:40.615829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.142 ms 00:18:40.634 [2024-12-16 10:50:40.615843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.634 [2024-12-16 10:50:40.615877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.634 [2024-12-16 10:50:40.615888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Decorate bands 00:18:40.634 [2024-12-16 10:50:40.615897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:18:40.634 [2024-12-16 10:50:40.615905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.634 [2024-12-16 10:50:40.615947] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:40.634 [2024-12-16 10:50:40.615975] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:18:40.634 [2024-12-16 10:50:40.616016] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:40.634 [2024-12-16 10:50:40.616036] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:18:40.634 [2024-12-16 10:50:40.616147] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:40.634 [2024-12-16 10:50:40.616158] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:40.634 [2024-12-16 10:50:40.616172] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:18:40.634 [2024-12-16 10:50:40.616183] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:40.634 [2024-12-16 10:50:40.616203] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:40.634 [2024-12-16 10:50:40.616211] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:18:40.634 [2024-12-16 10:50:40.616219] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:40.634 [2024-12-16 10:50:40.616227] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:40.634 [2024-12-16 10:50:40.616235] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:40.634 [2024-12-16 10:50:40.616243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.634 [2024-12-16 10:50:40.616251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:40.634 [2024-12-16 10:50:40.616259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.319 ms 00:18:40.634 [2024-12-16 10:50:40.616267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.634 [2024-12-16 10:50:40.616352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.634 [2024-12-16 10:50:40.616366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:40.634 [2024-12-16 10:50:40.616377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:18:40.634 [2024-12-16 10:50:40.616384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.634 [2024-12-16 10:50:40.616485] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:40.634 [2024-12-16 10:50:40.616497] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:40.634 [2024-12-16 10:50:40.616507] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:40.634 [2024-12-16 10:50:40.616516] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:40.634 [2024-12-16 10:50:40.616524] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:40.634 [2024-12-16 10:50:40.616532] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:40.634 [2024-12-16 10:50:40.616540] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:18:40.634 [2024-12-16 10:50:40.616548] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:40.634 [2024-12-16 10:50:40.616556] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:18:40.634 [2024-12-16 10:50:40.616564] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:40.634 [2024-12-16 10:50:40.616572] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:40.634 [2024-12-16 10:50:40.616579] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:18:40.634 [2024-12-16 10:50:40.616590] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:40.634 [2024-12-16 10:50:40.616598] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:40.634 [2024-12-16 10:50:40.616606] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:18:40.634 [2024-12-16 10:50:40.616614] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:40.634 [2024-12-16 10:50:40.616623] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:40.634 [2024-12-16 10:50:40.616631] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:18:40.634 [2024-12-16 10:50:40.616639] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:40.634 [2024-12-16 10:50:40.616647] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:40.634 [2024-12-16 10:50:40.616655] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:18:40.634 [2024-12-16 10:50:40.616663] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:40.634 [2024-12-16 10:50:40.616671] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:40.634 [2024-12-16 10:50:40.616679] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:18:40.634 [2024-12-16 10:50:40.616688] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:40.634 [2024-12-16 10:50:40.616695] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:40.634 [2024-12-16 10:50:40.616703] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:18:40.634 [2024-12-16 10:50:40.616710] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:40.634 [2024-12-16 10:50:40.616724] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:40.634 [2024-12-16 10:50:40.616733] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:18:40.634 [2024-12-16 10:50:40.616741] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:40.634 [2024-12-16 10:50:40.616763] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:40.635 [2024-12-16 10:50:40.616771] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:18:40.635 [2024-12-16 10:50:40.616779] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:40.635 [2024-12-16 10:50:40.616788] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:40.635 [2024-12-16 10:50:40.616796] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:18:40.635 [2024-12-16 10:50:40.616804] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 
00:18:40.635 [2024-12-16 10:50:40.616811] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:40.635 [2024-12-16 10:50:40.616819] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:18:40.635 [2024-12-16 10:50:40.616828] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:40.635 [2024-12-16 10:50:40.616835] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:40.635 [2024-12-16 10:50:40.616844] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:18:40.635 [2024-12-16 10:50:40.616851] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:40.635 [2024-12-16 10:50:40.616859] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:40.635 [2024-12-16 10:50:40.616870] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:40.635 [2024-12-16 10:50:40.616879] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:40.635 [2024-12-16 10:50:40.616890] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:40.635 [2024-12-16 10:50:40.616899] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:40.635 [2024-12-16 10:50:40.616910] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:40.635 [2024-12-16 10:50:40.616918] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:40.635 [2024-12-16 10:50:40.616946] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:40.635 [2024-12-16 10:50:40.616956] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:40.635 [2024-12-16 10:50:40.616964] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:40.635 [2024-12-16 10:50:40.616974] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:40.635 [2024-12-16 10:50:40.616985] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:40.635 [2024-12-16 10:50:40.616996] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:18:40.635 [2024-12-16 10:50:40.617004] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:18:40.635 [2024-12-16 10:50:40.617013] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:18:40.635 [2024-12-16 10:50:40.617021] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:18:40.635 [2024-12-16 10:50:40.617029] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:18:40.635 [2024-12-16 10:50:40.617041] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:18:40.635 [2024-12-16 10:50:40.617049] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:18:40.635 [2024-12-16 10:50:40.617058] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:18:40.635 [2024-12-16 
10:50:40.617067] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:18:40.635 [2024-12-16 10:50:40.617082] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:18:40.635 [2024-12-16 10:50:40.617091] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:18:40.635 [2024-12-16 10:50:40.617100] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:18:40.635 [2024-12-16 10:50:40.617108] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:18:40.635 [2024-12-16 10:50:40.617117] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:18:40.635 [2024-12-16 10:50:40.617125] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:40.635 [2024-12-16 10:50:40.617134] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:40.635 [2024-12-16 10:50:40.617145] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:40.635 [2024-12-16 10:50:40.617153] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:40.635 [2024-12-16 10:50:40.617162] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:40.635 [2024-12-16 10:50:40.617170] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:40.635 [2024-12-16 10:50:40.617179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.635 [2024-12-16 10:50:40.617190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:40.635 [2024-12-16 10:50:40.617200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.762 ms 00:18:40.635 [2024-12-16 10:50:40.617209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.897 [2024-12-16 10:50:40.644442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.897 [2024-12-16 10:50:40.644527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:40.897 [2024-12-16 10:50:40.644554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.183 ms 00:18:40.897 [2024-12-16 10:50:40.644571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.897 [2024-12-16 10:50:40.644740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.897 [2024-12-16 10:50:40.644776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:40.897 [2024-12-16 10:50:40.644790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.115 ms 00:18:40.897 [2024-12-16 10:50:40.644804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.897 [2024-12-16 10:50:40.656779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.897 [2024-12-16 
10:50:40.656823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:40.897 [2024-12-16 10:50:40.656834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.860 ms 00:18:40.897 [2024-12-16 10:50:40.656842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.897 [2024-12-16 10:50:40.656879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.897 [2024-12-16 10:50:40.656889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:40.897 [2024-12-16 10:50:40.656897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:40.897 [2024-12-16 10:50:40.656905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.897 [2024-12-16 10:50:40.657470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.897 [2024-12-16 10:50:40.657499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:40.897 [2024-12-16 10:50:40.657510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.473 ms 00:18:40.897 [2024-12-16 10:50:40.657519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.897 [2024-12-16 10:50:40.657664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.897 [2024-12-16 10:50:40.657675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:40.897 [2024-12-16 10:50:40.657684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.118 ms 00:18:40.897 [2024-12-16 10:50:40.657693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.897 [2024-12-16 10:50:40.664489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.897 [2024-12-16 10:50:40.664533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:40.897 [2024-12-16 10:50:40.664550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.775 ms 00:18:40.897 [2024-12-16 10:50:40.664557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.897 [2024-12-16 10:50:40.668388] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:18:40.897 [2024-12-16 10:50:40.668444] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:40.897 [2024-12-16 10:50:40.668458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.897 [2024-12-16 10:50:40.668466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:40.897 [2024-12-16 10:50:40.668476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.805 ms 00:18:40.897 [2024-12-16 10:50:40.668483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.897 [2024-12-16 10:50:40.684119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.897 [2024-12-16 10:50:40.684169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:40.897 [2024-12-16 10:50:40.684184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.584 ms 00:18:40.897 [2024-12-16 10:50:40.684197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.897 [2024-12-16 10:50:40.687181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.897 [2024-12-16 10:50:40.687227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Restore band info metadata 00:18:40.897 [2024-12-16 10:50:40.687236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.934 ms 00:18:40.897 [2024-12-16 10:50:40.687244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.897 [2024-12-16 10:50:40.689824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.897 [2024-12-16 10:50:40.689866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:18:40.897 [2024-12-16 10:50:40.689876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.535 ms 00:18:40.897 [2024-12-16 10:50:40.689883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.897 [2024-12-16 10:50:40.690263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.897 [2024-12-16 10:50:40.690277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:40.897 [2024-12-16 10:50:40.690286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.282 ms 00:18:40.897 [2024-12-16 10:50:40.690295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.897 [2024-12-16 10:50:40.713355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.897 [2024-12-16 10:50:40.713587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:40.897 [2024-12-16 10:50:40.713608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.044 ms 00:18:40.897 [2024-12-16 10:50:40.713618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.897 [2024-12-16 10:50:40.721708] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:18:40.897 [2024-12-16 10:50:40.724848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.897 [2024-12-16 10:50:40.725025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:40.897 [2024-12-16 10:50:40.725052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.116 ms 00:18:40.897 [2024-12-16 10:50:40.725061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.897 [2024-12-16 10:50:40.725146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.897 [2024-12-16 10:50:40.725158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:40.897 [2024-12-16 10:50:40.725168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:18:40.897 [2024-12-16 10:50:40.725176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.898 [2024-12-16 10:50:40.725245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.898 [2024-12-16 10:50:40.725255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:40.898 [2024-12-16 10:50:40.725269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:18:40.898 [2024-12-16 10:50:40.725279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.898 [2024-12-16 10:50:40.725309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.898 [2024-12-16 10:50:40.725318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:40.898 [2024-12-16 10:50:40.725330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:18:40.898 [2024-12-16 10:50:40.725338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:18:40.898 [2024-12-16 10:50:40.725374] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:40.898 [2024-12-16 10:50:40.725388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.898 [2024-12-16 10:50:40.725396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:40.898 [2024-12-16 10:50:40.725404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:18:40.898 [2024-12-16 10:50:40.725412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.898 [2024-12-16 10:50:40.730871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.898 [2024-12-16 10:50:40.730921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:40.898 [2024-12-16 10:50:40.730956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.437 ms 00:18:40.898 [2024-12-16 10:50:40.730963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.898 [2024-12-16 10:50:40.731050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.898 [2024-12-16 10:50:40.731061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:40.898 [2024-12-16 10:50:40.731070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:18:40.898 [2024-12-16 10:50:40.731079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.898 [2024-12-16 10:50:40.733437] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 133.683 ms, result 0 00:18:42.288  [2024-12-16T10:50:43.223Z] Copying: 10/1024 [MB] (10 MBps) [2024-12-16T10:50:44.164Z] Copying: 21/1024 [MB] (10 MBps) [2024-12-16T10:50:45.107Z] Copying: 31/1024 [MB] (10 MBps) [2024-12-16T10:50:46.047Z] Copying: 52/1024 [MB] (20 MBps) [2024-12-16T10:50:46.990Z] Copying: 66/1024 [MB] (14 MBps) [2024-12-16T10:50:47.933Z] Copying: 81/1024 [MB] (15 MBps) [2024-12-16T10:50:49.321Z] Copying: 98/1024 [MB] (16 MBps) [2024-12-16T10:50:50.267Z] Copying: 117/1024 [MB] (19 MBps) [2024-12-16T10:50:51.210Z] Copying: 133/1024 [MB] (16 MBps) [2024-12-16T10:50:52.153Z] Copying: 154/1024 [MB] (20 MBps) [2024-12-16T10:50:53.131Z] Copying: 173/1024 [MB] (19 MBps) [2024-12-16T10:50:54.075Z] Copying: 187/1024 [MB] (14 MBps) [2024-12-16T10:50:55.017Z] Copying: 204/1024 [MB] (17 MBps) [2024-12-16T10:50:55.961Z] Copying: 221/1024 [MB] (16 MBps) [2024-12-16T10:50:56.970Z] Copying: 235/1024 [MB] (14 MBps) [2024-12-16T10:50:57.916Z] Copying: 248/1024 [MB] (12 MBps) [2024-12-16T10:50:59.304Z] Copying: 271/1024 [MB] (22 MBps) [2024-12-16T10:51:00.246Z] Copying: 283/1024 [MB] (11 MBps) [2024-12-16T10:51:01.190Z] Copying: 297/1024 [MB] (14 MBps) [2024-12-16T10:51:02.136Z] Copying: 312/1024 [MB] (15 MBps) [2024-12-16T10:51:03.079Z] Copying: 323/1024 [MB] (10 MBps) [2024-12-16T10:51:04.021Z] Copying: 333/1024 [MB] (10 MBps) [2024-12-16T10:51:04.983Z] Copying: 374/1024 [MB] (40 MBps) [2024-12-16T10:51:05.950Z] Copying: 398/1024 [MB] (23 MBps) [2024-12-16T10:51:07.349Z] Copying: 418/1024 [MB] (20 MBps) [2024-12-16T10:51:07.924Z] Copying: 431/1024 [MB] (12 MBps) [2024-12-16T10:51:09.310Z] Copying: 450/1024 [MB] (18 MBps) [2024-12-16T10:51:10.250Z] Copying: 460/1024 [MB] (10 MBps) [2024-12-16T10:51:11.193Z] Copying: 482/1024 [MB] (21 MBps) [2024-12-16T10:51:12.138Z] Copying: 500/1024 [MB] (18 MBps) [2024-12-16T10:51:13.082Z] Copying: 515/1024 [MB] (15 MBps) 
[2024-12-16T10:51:14.027Z] Copying: 531/1024 [MB] (15 MBps) [2024-12-16T10:51:14.974Z] Copying: 548/1024 [MB] (16 MBps) [2024-12-16T10:51:15.926Z] Copying: 566/1024 [MB] (18 MBps) [2024-12-16T10:51:17.316Z] Copying: 580/1024 [MB] (13 MBps) [2024-12-16T10:51:18.261Z] Copying: 591/1024 [MB] (10 MBps) [2024-12-16T10:51:19.203Z] Copying: 601/1024 [MB] (10 MBps) [2024-12-16T10:51:20.175Z] Copying: 611/1024 [MB] (10 MBps) [2024-12-16T10:51:21.120Z] Copying: 621/1024 [MB] (10 MBps) [2024-12-16T10:51:22.065Z] Copying: 631/1024 [MB] (10 MBps) [2024-12-16T10:51:23.008Z] Copying: 642/1024 [MB] (10 MBps) [2024-12-16T10:51:23.953Z] Copying: 652/1024 [MB] (10 MBps) [2024-12-16T10:51:25.346Z] Copying: 663/1024 [MB] (11 MBps) [2024-12-16T10:51:25.921Z] Copying: 674/1024 [MB] (10 MBps) [2024-12-16T10:51:27.311Z] Copying: 685/1024 [MB] (11 MBps) [2024-12-16T10:51:28.255Z] Copying: 696/1024 [MB] (10 MBps) [2024-12-16T10:51:29.199Z] Copying: 707/1024 [MB] (10 MBps) [2024-12-16T10:51:30.145Z] Copying: 717/1024 [MB] (10 MBps) [2024-12-16T10:51:31.090Z] Copying: 728/1024 [MB] (10 MBps) [2024-12-16T10:51:32.033Z] Copying: 739/1024 [MB] (10 MBps) [2024-12-16T10:51:32.977Z] Copying: 749/1024 [MB] (10 MBps) [2024-12-16T10:51:33.922Z] Copying: 760/1024 [MB] (10 MBps) [2024-12-16T10:51:35.311Z] Copying: 771/1024 [MB] (10 MBps) [2024-12-16T10:51:36.259Z] Copying: 788/1024 [MB] (17 MBps) [2024-12-16T10:51:37.204Z] Copying: 801/1024 [MB] (12 MBps) [2024-12-16T10:51:38.150Z] Copying: 813/1024 [MB] (12 MBps) [2024-12-16T10:51:39.094Z] Copying: 831/1024 [MB] (17 MBps) [2024-12-16T10:51:40.035Z] Copying: 847/1024 [MB] (16 MBps) [2024-12-16T10:51:40.980Z] Copying: 868/1024 [MB] (20 MBps) [2024-12-16T10:51:41.925Z] Copying: 885/1024 [MB] (17 MBps) [2024-12-16T10:51:42.924Z] Copying: 900/1024 [MB] (15 MBps) [2024-12-16T10:51:44.313Z] Copying: 916/1024 [MB] (16 MBps) [2024-12-16T10:51:45.259Z] Copying: 934/1024 [MB] (17 MBps) [2024-12-16T10:51:46.204Z] Copying: 947/1024 [MB] (13 MBps) [2024-12-16T10:51:47.147Z] Copying: 966/1024 [MB] (19 MBps) [2024-12-16T10:51:48.093Z] Copying: 980/1024 [MB] (13 MBps) [2024-12-16T10:51:49.036Z] Copying: 991/1024 [MB] (11 MBps) [2024-12-16T10:51:49.979Z] Copying: 1004/1024 [MB] (12 MBps) [2024-12-16T10:51:50.242Z] Copying: 1020/1024 [MB] (16 MBps) [2024-12-16T10:51:50.242Z] Copying: 1024/1024 [MB] (average 14 MBps)[2024-12-16 10:51:50.098757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.253 [2024-12-16 10:51:50.098961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:50.253 [2024-12-16 10:51:50.098983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:50.253 [2024-12-16 10:51:50.098994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.253 [2024-12-16 10:51:50.099024] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:50.253 [2024-12-16 10:51:50.099568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.253 [2024-12-16 10:51:50.099597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:50.253 [2024-12-16 10:51:50.099606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.530 ms 00:19:50.253 [2024-12-16 10:51:50.099613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.253 [2024-12-16 10:51:50.099818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.253 [2024-12-16 10:51:50.099828] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:50.253 [2024-12-16 10:51:50.099840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.186 ms 00:19:50.253 [2024-12-16 10:51:50.099852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.253 [2024-12-16 10:51:50.103756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.253 [2024-12-16 10:51:50.103785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:50.253 [2024-12-16 10:51:50.103796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.890 ms 00:19:50.253 [2024-12-16 10:51:50.103803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.253 [2024-12-16 10:51:50.110263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.253 [2024-12-16 10:51:50.110388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:50.253 [2024-12-16 10:51:50.110405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.443 ms 00:19:50.253 [2024-12-16 10:51:50.110413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.253 [2024-12-16 10:51:50.112486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.253 [2024-12-16 10:51:50.112521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:50.253 [2024-12-16 10:51:50.112531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.017 ms 00:19:50.253 [2024-12-16 10:51:50.112538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.253 [2024-12-16 10:51:50.116572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.253 [2024-12-16 10:51:50.116608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:50.253 [2024-12-16 10:51:50.116618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.001 ms 00:19:50.253 [2024-12-16 10:51:50.116626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.253 [2024-12-16 10:51:50.116766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.253 [2024-12-16 10:51:50.116777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:50.253 [2024-12-16 10:51:50.116786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.108 ms 00:19:50.253 [2024-12-16 10:51:50.116794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.253 [2024-12-16 10:51:50.120191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.253 [2024-12-16 10:51:50.120226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:50.253 [2024-12-16 10:51:50.120518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.382 ms 00:19:50.253 [2024-12-16 10:51:50.120526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.253 [2024-12-16 10:51:50.122714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.253 [2024-12-16 10:51:50.122743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:50.253 [2024-12-16 10:51:50.122752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.157 ms 00:19:50.253 [2024-12-16 10:51:50.122759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.253 [2024-12-16 10:51:50.124548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:19:50.253 [2024-12-16 10:51:50.124578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:50.253 [2024-12-16 10:51:50.124587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.758 ms 00:19:50.253 [2024-12-16 10:51:50.124594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.253 [2024-12-16 10:51:50.126367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.253 [2024-12-16 10:51:50.126402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:50.253 [2024-12-16 10:51:50.126410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.718 ms 00:19:50.253 [2024-12-16 10:51:50.126417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.253 [2024-12-16 10:51:50.126445] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:50.253 [2024-12-16 10:51:50.126468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:50.253 [2024-12-16 10:51:50.126478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:50.253 [2024-12-16 10:51:50.126486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:50.253 [2024-12-16 10:51:50.126493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:50.253 [2024-12-16 10:51:50.126501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:50.253 [2024-12-16 10:51:50.126508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:50.253 [2024-12-16 10:51:50.126515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:50.253 [2024-12-16 10:51:50.126523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:50.253 [2024-12-16 10:51:50.126531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:50.253 [2024-12-16 10:51:50.126538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:50.253 [2024-12-16 10:51:50.126545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:50.253 [2024-12-16 10:51:50.126553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:50.253 [2024-12-16 10:51:50.126561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:50.253 [2024-12-16 10:51:50.126568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:50.253 [2024-12-16 10:51:50.126575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:50.253 [2024-12-16 10:51:50.126583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:50.253 [2024-12-16 10:51:50.126590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:50.253 [2024-12-16 10:51:50.126597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:50.253 [2024-12-16 10:51:50.126604] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:50.254 [2024-12-16 10:51:50.126611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:50.254 [2024-12-16 10:51:50.126619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:50.254 [2024-12-16 10:51:50.126627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:50.254 [2024-12-16 10:51:50.126634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:50.254 [2024-12-16 10:51:50.126641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:50.254 [2024-12-16 10:51:50.126648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:50.254 [2024-12-16 10:51:50.126656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:50.254 [2024-12-16 10:51:50.126664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:50.254 [2024-12-16 10:51:50.126671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:50.254 [2024-12-16 10:51:50.126678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:50.254 [2024-12-16 10:51:50.126685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:50.254 [2024-12-16 10:51:50.126694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:50.254 [2024-12-16 10:51:50.126701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:50.254 [2024-12-16 10:51:50.126709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:50.254 [2024-12-16 10:51:50.126716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:50.254 [2024-12-16 10:51:50.126723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:50.254 [2024-12-16 10:51:50.126730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:50.254 [2024-12-16 10:51:50.126737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:50.254 [2024-12-16 10:51:50.126745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:50.254 [2024-12-16 10:51:50.126752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:50.254 [2024-12-16 10:51:50.126759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:50.254 [2024-12-16 10:51:50.126767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:50.254 [2024-12-16 10:51:50.126774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:50.254 [2024-12-16 10:51:50.126781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:50.254 [2024-12-16 
10:51:50.126788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:50.254 [2024-12-16 10:51:50.126795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:50.254 [2024-12-16 10:51:50.126803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:50.254 [2024-12-16 10:51:50.126810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:50.254 [2024-12-16 10:51:50.126818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:50.254 [2024-12-16 10:51:50.126832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:50.254 [2024-12-16 10:51:50.126840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:50.254 [2024-12-16 10:51:50.126847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:50.254 [2024-12-16 10:51:50.126855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:50.254 [2024-12-16 10:51:50.126862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:50.254 [2024-12-16 10:51:50.126870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:50.254 [2024-12-16 10:51:50.126877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:50.254 [2024-12-16 10:51:50.126885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:50.254 [2024-12-16 10:51:50.126893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:50.254 [2024-12-16 10:51:50.126900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:50.254 [2024-12-16 10:51:50.126907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:50.254 [2024-12-16 10:51:50.126915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:50.254 [2024-12-16 10:51:50.126922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:50.254 [2024-12-16 10:51:50.126941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:50.254 [2024-12-16 10:51:50.126949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:50.254 [2024-12-16 10:51:50.126957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:50.254 [2024-12-16 10:51:50.126964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:50.254 [2024-12-16 10:51:50.126972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:50.254 [2024-12-16 10:51:50.126979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:50.254 [2024-12-16 10:51:50.126987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 
00:19:50.254 [2024-12-16 10:51:50.126994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:50.254 [2024-12-16 10:51:50.127001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:50.254 [2024-12-16 10:51:50.127009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:50.254 [2024-12-16 10:51:50.127016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:50.254 [2024-12-16 10:51:50.127024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:50.254 [2024-12-16 10:51:50.127032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:50.254 [2024-12-16 10:51:50.127039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:50.254 [2024-12-16 10:51:50.127046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:50.254 [2024-12-16 10:51:50.127054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:50.254 [2024-12-16 10:51:50.127061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:50.254 [2024-12-16 10:51:50.127069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:50.254 [2024-12-16 10:51:50.127078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:50.254 [2024-12-16 10:51:50.127085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:50.254 [2024-12-16 10:51:50.127093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:50.254 [2024-12-16 10:51:50.127100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:50.254 [2024-12-16 10:51:50.127108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:50.254 [2024-12-16 10:51:50.127116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:50.254 [2024-12-16 10:51:50.127124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:50.254 [2024-12-16 10:51:50.127131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:50.254 [2024-12-16 10:51:50.127138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:50.254 [2024-12-16 10:51:50.127145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:50.254 [2024-12-16 10:51:50.127153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:50.254 [2024-12-16 10:51:50.127160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:50.254 [2024-12-16 10:51:50.127168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:50.254 [2024-12-16 10:51:50.127175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 
wr_cnt: 0 state: free 00:19:50.254 [2024-12-16 10:51:50.127182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:50.254 [2024-12-16 10:51:50.127190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:50.254 [2024-12-16 10:51:50.127197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:50.254 [2024-12-16 10:51:50.127205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:50.254 [2024-12-16 10:51:50.127212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:50.254 [2024-12-16 10:51:50.127220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:50.254 [2024-12-16 10:51:50.127227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:50.254 [2024-12-16 10:51:50.127244] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:50.254 [2024-12-16 10:51:50.127252] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 642f8cb2-110e-4224-a571-58eb4e48d140 00:19:50.254 [2024-12-16 10:51:50.127260] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:50.254 [2024-12-16 10:51:50.127267] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:50.254 [2024-12-16 10:51:50.127274] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:50.254 [2024-12-16 10:51:50.127281] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:50.254 [2024-12-16 10:51:50.127288] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:50.254 [2024-12-16 10:51:50.127296] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:50.255 [2024-12-16 10:51:50.127303] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:50.255 [2024-12-16 10:51:50.127310] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:50.255 [2024-12-16 10:51:50.127316] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:50.255 [2024-12-16 10:51:50.127324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.255 [2024-12-16 10:51:50.127331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:50.255 [2024-12-16 10:51:50.127344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.879 ms 00:19:50.255 [2024-12-16 10:51:50.127351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.255 [2024-12-16 10:51:50.129310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.255 [2024-12-16 10:51:50.129357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:50.255 [2024-12-16 10:51:50.129382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.944 ms 00:19:50.255 [2024-12-16 10:51:50.129559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.255 [2024-12-16 10:51:50.129667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.255 [2024-12-16 10:51:50.129703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:50.255 [2024-12-16 10:51:50.129727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:19:50.255 
[2024-12-16 10:51:50.129846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.255 [2024-12-16 10:51:50.134923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:50.255 [2024-12-16 10:51:50.134975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:50.255 [2024-12-16 10:51:50.134984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:50.255 [2024-12-16 10:51:50.134992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.255 [2024-12-16 10:51:50.135041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:50.255 [2024-12-16 10:51:50.135052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:50.255 [2024-12-16 10:51:50.135060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:50.255 [2024-12-16 10:51:50.135067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.255 [2024-12-16 10:51:50.135122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:50.255 [2024-12-16 10:51:50.135132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:50.255 [2024-12-16 10:51:50.135139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:50.255 [2024-12-16 10:51:50.135147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.255 [2024-12-16 10:51:50.135162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:50.255 [2024-12-16 10:51:50.135170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:50.255 [2024-12-16 10:51:50.135180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:50.255 [2024-12-16 10:51:50.135186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.255 [2024-12-16 10:51:50.145431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:50.255 [2024-12-16 10:51:50.145472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:50.255 [2024-12-16 10:51:50.145482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:50.255 [2024-12-16 10:51:50.145489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.255 [2024-12-16 10:51:50.154147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:50.255 [2024-12-16 10:51:50.154192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:50.255 [2024-12-16 10:51:50.154210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:50.255 [2024-12-16 10:51:50.154219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.255 [2024-12-16 10:51:50.154266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:50.255 [2024-12-16 10:51:50.154276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:50.255 [2024-12-16 10:51:50.154290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:50.255 [2024-12-16 10:51:50.154298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.255 [2024-12-16 10:51:50.154324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:50.255 [2024-12-16 10:51:50.154334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:50.255 [2024-12-16 10:51:50.154345] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:50.255 [2024-12-16 10:51:50.154355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.255 [2024-12-16 10:51:50.154422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:50.255 [2024-12-16 10:51:50.154432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:50.255 [2024-12-16 10:51:50.154440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:50.255 [2024-12-16 10:51:50.154448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.255 [2024-12-16 10:51:50.154480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:50.255 [2024-12-16 10:51:50.154490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:50.255 [2024-12-16 10:51:50.154498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:50.255 [2024-12-16 10:51:50.154506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.255 [2024-12-16 10:51:50.154547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:50.255 [2024-12-16 10:51:50.154556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:50.255 [2024-12-16 10:51:50.154563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:50.255 [2024-12-16 10:51:50.154575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.255 [2024-12-16 10:51:50.154621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:50.255 [2024-12-16 10:51:50.154631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:50.255 [2024-12-16 10:51:50.154640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:50.255 [2024-12-16 10:51:50.154650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.255 [2024-12-16 10:51:50.154778] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 55.986 ms, result 0 00:19:50.517 00:19:50.517 00:19:50.517 10:51:50 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:19:53.065 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:19:53.065 10:51:52 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:19:53.065 [2024-12-16 10:51:52.615127] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:19:53.065 [2024-12-16 10:51:52.615218] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87339 ] 00:19:53.065 [2024-12-16 10:51:52.748749] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:53.065 [2024-12-16 10:51:52.787053] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:19:53.065 [2024-12-16 10:51:52.900508] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:53.065 [2024-12-16 10:51:52.900593] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:53.328 [2024-12-16 10:51:53.059290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.328 [2024-12-16 10:51:53.059355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:53.328 [2024-12-16 10:51:53.059385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:53.328 [2024-12-16 10:51:53.059394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.328 [2024-12-16 10:51:53.059451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.328 [2024-12-16 10:51:53.059465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:53.328 [2024-12-16 10:51:53.059475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:19:53.328 [2024-12-16 10:51:53.059490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.328 [2024-12-16 10:51:53.059511] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:53.328 [2024-12-16 10:51:53.060101] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:53.328 [2024-12-16 10:51:53.060154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.328 [2024-12-16 10:51:53.060166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:53.328 [2024-12-16 10:51:53.060181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.647 ms 00:19:53.328 [2024-12-16 10:51:53.060190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.328 [2024-12-16 10:51:53.061945] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:53.328 [2024-12-16 10:51:53.065661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.328 [2024-12-16 10:51:53.065852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:53.328 [2024-12-16 10:51:53.065890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.736 ms 00:19:53.328 [2024-12-16 10:51:53.065904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.328 [2024-12-16 10:51:53.065998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.328 [2024-12-16 10:51:53.066019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:53.328 [2024-12-16 10:51:53.066040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:19:53.328 [2024-12-16 10:51:53.066053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.328 [2024-12-16 10:51:53.074216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:19:53.328 [2024-12-16 10:51:53.074260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:53.328 [2024-12-16 10:51:53.074272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.095 ms 00:19:53.328 [2024-12-16 10:51:53.074280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.328 [2024-12-16 10:51:53.074388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.328 [2024-12-16 10:51:53.074398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:53.328 [2024-12-16 10:51:53.074408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:19:53.328 [2024-12-16 10:51:53.074416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.328 [2024-12-16 10:51:53.074487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.328 [2024-12-16 10:51:53.074498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:53.328 [2024-12-16 10:51:53.074507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:53.328 [2024-12-16 10:51:53.074515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.328 [2024-12-16 10:51:53.074539] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:53.328 [2024-12-16 10:51:53.076611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.328 [2024-12-16 10:51:53.076798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:53.328 [2024-12-16 10:51:53.076831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.078 ms 00:19:53.328 [2024-12-16 10:51:53.076843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.328 [2024-12-16 10:51:53.076900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.328 [2024-12-16 10:51:53.076914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:53.328 [2024-12-16 10:51:53.076951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:19:53.328 [2024-12-16 10:51:53.076965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.328 [2024-12-16 10:51:53.076998] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:53.328 [2024-12-16 10:51:53.077034] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:53.328 [2024-12-16 10:51:53.077080] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:53.328 [2024-12-16 10:51:53.077103] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:53.328 [2024-12-16 10:51:53.077210] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:53.328 [2024-12-16 10:51:53.077222] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:53.329 [2024-12-16 10:51:53.077234] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:53.329 [2024-12-16 10:51:53.077244] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:53.329 [2024-12-16 10:51:53.077257] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:53.329 [2024-12-16 10:51:53.077266] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:19:53.329 [2024-12-16 10:51:53.077274] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:53.329 [2024-12-16 10:51:53.077281] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:53.329 [2024-12-16 10:51:53.077289] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:53.329 [2024-12-16 10:51:53.077298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.329 [2024-12-16 10:51:53.077309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:53.329 [2024-12-16 10:51:53.077318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.304 ms 00:19:53.329 [2024-12-16 10:51:53.077326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.329 [2024-12-16 10:51:53.077417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.329 [2024-12-16 10:51:53.077426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:53.329 [2024-12-16 10:51:53.077438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:19:53.329 [2024-12-16 10:51:53.077446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.329 [2024-12-16 10:51:53.077545] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:53.329 [2024-12-16 10:51:53.077557] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:53.329 [2024-12-16 10:51:53.077566] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:53.329 [2024-12-16 10:51:53.077574] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:53.329 [2024-12-16 10:51:53.077583] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:53.329 [2024-12-16 10:51:53.077592] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:53.329 [2024-12-16 10:51:53.077600] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:19:53.329 [2024-12-16 10:51:53.077608] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:53.329 [2024-12-16 10:51:53.077617] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:19:53.329 [2024-12-16 10:51:53.077626] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:53.329 [2024-12-16 10:51:53.077635] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:53.329 [2024-12-16 10:51:53.077644] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:19:53.329 [2024-12-16 10:51:53.077654] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:53.329 [2024-12-16 10:51:53.077661] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:53.329 [2024-12-16 10:51:53.077669] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:19:53.329 [2024-12-16 10:51:53.077678] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:53.329 [2024-12-16 10:51:53.077687] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:53.329 [2024-12-16 10:51:53.077695] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:19:53.329 [2024-12-16 10:51:53.077702] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:53.329 [2024-12-16 10:51:53.077709] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:53.329 [2024-12-16 10:51:53.077716] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:19:53.329 [2024-12-16 10:51:53.077723] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:53.329 [2024-12-16 10:51:53.077729] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:53.329 [2024-12-16 10:51:53.077736] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:19:53.329 [2024-12-16 10:51:53.077743] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:53.329 [2024-12-16 10:51:53.077750] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:53.329 [2024-12-16 10:51:53.077757] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:19:53.329 [2024-12-16 10:51:53.077764] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:53.329 [2024-12-16 10:51:53.077777] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:53.329 [2024-12-16 10:51:53.077784] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:19:53.329 [2024-12-16 10:51:53.077791] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:53.329 [2024-12-16 10:51:53.077798] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:53.329 [2024-12-16 10:51:53.077805] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:19:53.329 [2024-12-16 10:51:53.077812] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:53.329 [2024-12-16 10:51:53.077819] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:53.329 [2024-12-16 10:51:53.077825] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:19:53.329 [2024-12-16 10:51:53.077832] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:53.329 [2024-12-16 10:51:53.077839] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:53.329 [2024-12-16 10:51:53.077846] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:19:53.329 [2024-12-16 10:51:53.077852] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:53.329 [2024-12-16 10:51:53.077859] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:53.329 [2024-12-16 10:51:53.077866] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:19:53.329 [2024-12-16 10:51:53.077872] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:53.329 [2024-12-16 10:51:53.077880] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:53.329 [2024-12-16 10:51:53.077890] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:53.329 [2024-12-16 10:51:53.077898] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:53.329 [2024-12-16 10:51:53.077907] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:53.329 [2024-12-16 10:51:53.077915] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:53.329 [2024-12-16 10:51:53.077924] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:53.329 [2024-12-16 10:51:53.077947] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:53.329 
[2024-12-16 10:51:53.077955] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:53.329 [2024-12-16 10:51:53.077962] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:53.329 [2024-12-16 10:51:53.077969] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:53.329 [2024-12-16 10:51:53.077977] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:53.329 [2024-12-16 10:51:53.077987] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:53.329 [2024-12-16 10:51:53.077996] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:19:53.329 [2024-12-16 10:51:53.078004] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:19:53.329 [2024-12-16 10:51:53.078012] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:19:53.329 [2024-12-16 10:51:53.078019] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:19:53.329 [2024-12-16 10:51:53.078027] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:19:53.329 [2024-12-16 10:51:53.078037] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:19:53.329 [2024-12-16 10:51:53.078044] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:19:53.329 [2024-12-16 10:51:53.078052] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:19:53.329 [2024-12-16 10:51:53.078060] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:19:53.329 [2024-12-16 10:51:53.078073] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:19:53.329 [2024-12-16 10:51:53.078089] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:19:53.329 [2024-12-16 10:51:53.078096] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:19:53.329 [2024-12-16 10:51:53.078103] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:19:53.329 [2024-12-16 10:51:53.078111] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:19:53.329 [2024-12-16 10:51:53.078119] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:53.329 [2024-12-16 10:51:53.078127] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:53.329 [2024-12-16 10:51:53.078135] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:19:53.329 [2024-12-16 10:51:53.078143] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:53.329 [2024-12-16 10:51:53.078150] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:53.329 [2024-12-16 10:51:53.078158] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:53.329 [2024-12-16 10:51:53.078165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.329 [2024-12-16 10:51:53.078175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:53.329 [2024-12-16 10:51:53.078183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.689 ms 00:19:53.329 [2024-12-16 10:51:53.078191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.329 [2024-12-16 10:51:53.097897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.329 [2024-12-16 10:51:53.097957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:53.329 [2024-12-16 10:51:53.097971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.658 ms 00:19:53.329 [2024-12-16 10:51:53.097979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.329 [2024-12-16 10:51:53.098076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.329 [2024-12-16 10:51:53.098085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:53.330 [2024-12-16 10:51:53.098093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:19:53.330 [2024-12-16 10:51:53.098100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.330 [2024-12-16 10:51:53.106435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.330 [2024-12-16 10:51:53.106471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:53.330 [2024-12-16 10:51:53.106482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.281 ms 00:19:53.330 [2024-12-16 10:51:53.106491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.330 [2024-12-16 10:51:53.106523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.330 [2024-12-16 10:51:53.106533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:53.330 [2024-12-16 10:51:53.106542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:53.330 [2024-12-16 10:51:53.106550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.330 [2024-12-16 10:51:53.106898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.330 [2024-12-16 10:51:53.106920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:53.330 [2024-12-16 10:51:53.106953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.310 ms 00:19:53.330 [2024-12-16 10:51:53.106962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.330 [2024-12-16 10:51:53.107101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.330 [2024-12-16 10:51:53.107118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:53.330 [2024-12-16 10:51:53.107133] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.120 ms 00:19:53.330 [2024-12-16 10:51:53.107143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.330 [2024-12-16 10:51:53.111945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.330 [2024-12-16 10:51:53.111975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:53.330 [2024-12-16 10:51:53.111990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.778 ms 00:19:53.330 [2024-12-16 10:51:53.111999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.330 [2024-12-16 10:51:53.114391] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:19:53.330 [2024-12-16 10:51:53.114424] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:53.330 [2024-12-16 10:51:53.114438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.330 [2024-12-16 10:51:53.114445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:53.330 [2024-12-16 10:51:53.114453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.336 ms 00:19:53.330 [2024-12-16 10:51:53.114460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.330 [2024-12-16 10:51:53.129504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.330 [2024-12-16 10:51:53.129628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:53.330 [2024-12-16 10:51:53.129653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.006 ms 00:19:53.330 [2024-12-16 10:51:53.129662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.330 [2024-12-16 10:51:53.131807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.330 [2024-12-16 10:51:53.131835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:53.330 [2024-12-16 10:51:53.131845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.088 ms 00:19:53.330 [2024-12-16 10:51:53.131852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.330 [2024-12-16 10:51:53.133387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.330 [2024-12-16 10:51:53.133411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:53.330 [2024-12-16 10:51:53.133420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.503 ms 00:19:53.330 [2024-12-16 10:51:53.133426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.330 [2024-12-16 10:51:53.133738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.330 [2024-12-16 10:51:53.133755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:53.330 [2024-12-16 10:51:53.133764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.249 ms 00:19:53.330 [2024-12-16 10:51:53.133772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.330 [2024-12-16 10:51:53.149651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.330 [2024-12-16 10:51:53.149704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:53.330 [2024-12-16 10:51:53.149719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
15.862 ms 00:19:53.330 [2024-12-16 10:51:53.149727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.330 [2024-12-16 10:51:53.157168] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:19:53.330 [2024-12-16 10:51:53.159701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.330 [2024-12-16 10:51:53.159731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:53.330 [2024-12-16 10:51:53.159742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.927 ms 00:19:53.330 [2024-12-16 10:51:53.159754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.330 [2024-12-16 10:51:53.159813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.330 [2024-12-16 10:51:53.159825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:53.330 [2024-12-16 10:51:53.159834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:53.330 [2024-12-16 10:51:53.159841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.330 [2024-12-16 10:51:53.159947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.330 [2024-12-16 10:51:53.159958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:53.330 [2024-12-16 10:51:53.159967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:19:53.330 [2024-12-16 10:51:53.159977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.330 [2024-12-16 10:51:53.160009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.330 [2024-12-16 10:51:53.160018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:53.330 [2024-12-16 10:51:53.160039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:19:53.330 [2024-12-16 10:51:53.160050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.330 [2024-12-16 10:51:53.160078] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:53.330 [2024-12-16 10:51:53.160087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.330 [2024-12-16 10:51:53.160095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:53.330 [2024-12-16 10:51:53.160104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:19:53.330 [2024-12-16 10:51:53.160112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.330 [2024-12-16 10:51:53.164064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.330 [2024-12-16 10:51:53.164098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:53.330 [2024-12-16 10:51:53.164107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.927 ms 00:19:53.330 [2024-12-16 10:51:53.164115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.330 [2024-12-16 10:51:53.164183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.330 [2024-12-16 10:51:53.164192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:53.330 [2024-12-16 10:51:53.164201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:19:53.330 [2024-12-16 10:51:53.164208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.330 
[2024-12-16 10:51:53.165142] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 105.464 ms, result 0 00:19:54.273  [2024-12-16T10:51:55.208Z] Copying: 17/1024 [MB] (17 MBps) [2024-12-16T10:51:56.594Z] Copying: 34/1024 [MB] (16 MBps) [2024-12-16T10:51:57.537Z] Copying: 47/1024 [MB] (13 MBps) [2024-12-16T10:51:58.480Z] Copying: 63/1024 [MB] (15 MBps) [2024-12-16T10:51:59.423Z] Copying: 87/1024 [MB] (23 MBps) [2024-12-16T10:52:00.366Z] Copying: 110/1024 [MB] (23 MBps) [2024-12-16T10:52:01.309Z] Copying: 134/1024 [MB] (24 MBps) [2024-12-16T10:52:02.253Z] Copying: 152/1024 [MB] (18 MBps) [2024-12-16T10:52:03.194Z] Copying: 168/1024 [MB] (15 MBps) [2024-12-16T10:52:04.583Z] Copying: 182400/1048576 [kB] (10048 kBps) [2024-12-16T10:52:05.527Z] Copying: 190/1024 [MB] (12 MBps) [2024-12-16T10:52:06.538Z] Copying: 200/1024 [MB] (10 MBps) [2024-12-16T10:52:07.479Z] Copying: 211/1024 [MB] (10 MBps) [2024-12-16T10:52:08.422Z] Copying: 221/1024 [MB] (10 MBps) [2024-12-16T10:52:09.368Z] Copying: 231/1024 [MB] (10 MBps) [2024-12-16T10:52:10.310Z] Copying: 242/1024 [MB] (10 MBps) [2024-12-16T10:52:11.252Z] Copying: 254/1024 [MB] (11 MBps) [2024-12-16T10:52:12.191Z] Copying: 268/1024 [MB] (14 MBps) [2024-12-16T10:52:13.580Z] Copying: 301/1024 [MB] (33 MBps) [2024-12-16T10:52:14.525Z] Copying: 312/1024 [MB] (11 MBps) [2024-12-16T10:52:15.468Z] Copying: 324/1024 [MB] (11 MBps) [2024-12-16T10:52:16.412Z] Copying: 342/1024 [MB] (17 MBps) [2024-12-16T10:52:17.353Z] Copying: 364/1024 [MB] (22 MBps) [2024-12-16T10:52:18.305Z] Copying: 384/1024 [MB] (19 MBps) [2024-12-16T10:52:19.247Z] Copying: 399/1024 [MB] (14 MBps) [2024-12-16T10:52:20.193Z] Copying: 412/1024 [MB] (13 MBps) [2024-12-16T10:52:21.579Z] Copying: 431/1024 [MB] (19 MBps) [2024-12-16T10:52:22.522Z] Copying: 445/1024 [MB] (13 MBps) [2024-12-16T10:52:23.466Z] Copying: 461/1024 [MB] (15 MBps) [2024-12-16T10:52:24.411Z] Copying: 477/1024 [MB] (16 MBps) [2024-12-16T10:52:25.355Z] Copying: 499/1024 [MB] (21 MBps) [2024-12-16T10:52:26.299Z] Copying: 518/1024 [MB] (19 MBps) [2024-12-16T10:52:27.243Z] Copying: 531/1024 [MB] (12 MBps) [2024-12-16T10:52:28.185Z] Copying: 549/1024 [MB] (17 MBps) [2024-12-16T10:52:29.199Z] Copying: 565/1024 [MB] (16 MBps) [2024-12-16T10:52:30.591Z] Copying: 576/1024 [MB] (10 MBps) [2024-12-16T10:52:31.537Z] Copying: 586/1024 [MB] (10 MBps) [2024-12-16T10:52:32.483Z] Copying: 596/1024 [MB] (10 MBps) [2024-12-16T10:52:33.428Z] Copying: 621012/1048576 [kB] (10204 kBps) [2024-12-16T10:52:34.374Z] Copying: 616/1024 [MB] (10 MBps) [2024-12-16T10:52:35.326Z] Copying: 627/1024 [MB] (11 MBps) [2024-12-16T10:52:36.271Z] Copying: 638/1024 [MB] (11 MBps) [2024-12-16T10:52:37.215Z] Copying: 648/1024 [MB] (10 MBps) [2024-12-16T10:52:38.603Z] Copying: 659/1024 [MB] (10 MBps) [2024-12-16T10:52:39.546Z] Copying: 671/1024 [MB] (12 MBps) [2024-12-16T10:52:40.490Z] Copying: 683/1024 [MB] (12 MBps) [2024-12-16T10:52:41.434Z] Copying: 695/1024 [MB] (11 MBps) [2024-12-16T10:52:42.380Z] Copying: 706/1024 [MB] (10 MBps) [2024-12-16T10:52:43.326Z] Copying: 717/1024 [MB] (10 MBps) [2024-12-16T10:52:44.269Z] Copying: 727/1024 [MB] (10 MBps) [2024-12-16T10:52:45.218Z] Copying: 737/1024 [MB] (10 MBps) [2024-12-16T10:52:46.610Z] Copying: 765320/1048576 [kB] (10168 kBps) [2024-12-16T10:52:47.183Z] Copying: 757/1024 [MB] (10 MBps) [2024-12-16T10:52:48.573Z] Copying: 767/1024 [MB] (10 MBps) [2024-12-16T10:52:49.516Z] Copying: 796400/1048576 [kB] (10208 kBps) [2024-12-16T10:52:50.451Z] Copying: 787/1024 
[MB] (10 MBps) [2024-12-16T10:52:51.391Z] Copying: 817/1024 [MB] (29 MBps) [2024-12-16T10:52:52.398Z] Copying: 858/1024 [MB] (40 MBps) [2024-12-16T10:52:53.341Z] Copying: 880/1024 [MB] (22 MBps) [2024-12-16T10:52:54.282Z] Copying: 907/1024 [MB] (26 MBps) [2024-12-16T10:52:55.224Z] Copying: 932/1024 [MB] (25 MBps) [2024-12-16T10:52:56.611Z] Copying: 950/1024 [MB] (18 MBps) [2024-12-16T10:52:57.184Z] Copying: 970/1024 [MB] (19 MBps) [2024-12-16T10:52:58.573Z] Copying: 991/1024 [MB] (21 MBps) [2024-12-16T10:52:59.513Z] Copying: 1010/1024 [MB] (18 MBps) [2024-12-16T10:52:59.775Z] Copying: 1023/1024 [MB] (12 MBps) [2024-12-16T10:52:59.775Z] Copying: 1024/1024 [MB] (average 15 MBps)[2024-12-16 10:52:59.583079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.786 [2024-12-16 10:52:59.583135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:59.786 [2024-12-16 10:52:59.583149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:59.786 [2024-12-16 10:52:59.583157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.786 [2024-12-16 10:52:59.583183] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:59.786 [2024-12-16 10:52:59.583647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.786 [2024-12-16 10:52:59.583662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:59.786 [2024-12-16 10:52:59.583671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.445 ms 00:20:59.786 [2024-12-16 10:52:59.583684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.786 [2024-12-16 10:52:59.593364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.786 [2024-12-16 10:52:59.593490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:59.786 [2024-12-16 10:52:59.593507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.125 ms 00:20:59.786 [2024-12-16 10:52:59.593525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.786 [2024-12-16 10:52:59.614011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.786 [2024-12-16 10:52:59.614044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:59.786 [2024-12-16 10:52:59.614055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.468 ms 00:20:59.786 [2024-12-16 10:52:59.614062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.786 [2024-12-16 10:52:59.620171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.786 [2024-12-16 10:52:59.620287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:59.786 [2024-12-16 10:52:59.620302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.082 ms 00:20:59.786 [2024-12-16 10:52:59.620312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.786 [2024-12-16 10:52:59.622345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.786 [2024-12-16 10:52:59.622376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:59.786 [2024-12-16 10:52:59.622386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.983 ms 00:20:59.786 [2024-12-16 10:52:59.622393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.786 [2024-12-16 
10:52:59.625996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.786 [2024-12-16 10:52:59.626027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:59.786 [2024-12-16 10:52:59.626037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.574 ms 00:20:59.786 [2024-12-16 10:52:59.626044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.048 [2024-12-16 10:52:59.793371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.048 [2024-12-16 10:52:59.793426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:00.048 [2024-12-16 10:52:59.793441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 167.293 ms 00:21:00.048 [2024-12-16 10:52:59.793449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.048 [2024-12-16 10:52:59.796145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.048 [2024-12-16 10:52:59.796177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:21:00.048 [2024-12-16 10:52:59.796187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.680 ms 00:21:00.048 [2024-12-16 10:52:59.796195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.048 [2024-12-16 10:52:59.798413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.048 [2024-12-16 10:52:59.798541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:21:00.048 [2024-12-16 10:52:59.798557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.187 ms 00:21:00.048 [2024-12-16 10:52:59.798564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.048 [2024-12-16 10:52:59.800200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.048 [2024-12-16 10:52:59.800233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:00.048 [2024-12-16 10:52:59.800241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.607 ms 00:21:00.048 [2024-12-16 10:52:59.800248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.048 [2024-12-16 10:52:59.801922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.048 [2024-12-16 10:52:59.801966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:00.048 [2024-12-16 10:52:59.801975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.621 ms 00:21:00.048 [2024-12-16 10:52:59.801981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.048 [2024-12-16 10:52:59.802010] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:00.048 [2024-12-16 10:52:59.802024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 108288 / 261120 wr_cnt: 1 state: open 00:21:00.048 [2024-12-16 10:52:59.802034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:00.048 [2024-12-16 10:52:59.802042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:00.048 [2024-12-16 10:52:59.802050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:00.048 [2024-12-16 10:52:59.802057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:00.048 [2024-12-16 
10:52:59.802065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:00.048 [2024-12-16 10:52:59.802072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:00.048 [2024-12-16 10:52:59.802080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:00.048 [2024-12-16 10:52:59.802088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:00.048 [2024-12-16 10:52:59.802095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:00.048 [2024-12-16 10:52:59.802102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:00.048 [2024-12-16 10:52:59.802109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:00.048 [2024-12-16 10:52:59.802117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:00.048 [2024-12-16 10:52:59.802125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:00.048 [2024-12-16 10:52:59.802132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:00.048 [2024-12-16 10:52:59.802140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:00.048 [2024-12-16 10:52:59.802147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:00.048 [2024-12-16 10:52:59.802154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:00.048 [2024-12-16 10:52:59.802161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:00.048 [2024-12-16 10:52:59.802169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:00.048 [2024-12-16 10:52:59.802176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:00.048 [2024-12-16 10:52:59.802183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:00.048 [2024-12-16 10:52:59.802190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:00.048 [2024-12-16 10:52:59.802197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:00.048 [2024-12-16 10:52:59.802204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:00.048 [2024-12-16 10:52:59.802212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:00.048 [2024-12-16 10:52:59.802220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:00.048 [2024-12-16 10:52:59.802227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:00.048 [2024-12-16 10:52:59.802235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:00.048 [2024-12-16 10:52:59.802243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 
00:21:00.048 [2024-12-16 10:52:59.802250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:00.048 [2024-12-16 10:52:59.802257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:00.048 [2024-12-16 10:52:59.802265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:00.048 [2024-12-16 10:52:59.802273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:00.048 [2024-12-16 10:52:59.802280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:00.048 [2024-12-16 10:52:59.802288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:00.048 [2024-12-16 10:52:59.802295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:00.048 [2024-12-16 10:52:59.802303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:00.048 [2024-12-16 10:52:59.802310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:00.048 [2024-12-16 10:52:59.802317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:00.048 [2024-12-16 10:52:59.802324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:00.048 [2024-12-16 10:52:59.802331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:00.048 [2024-12-16 10:52:59.802339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:00.048 [2024-12-16 10:52:59.802346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:00.048 [2024-12-16 10:52:59.802353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:00.048 [2024-12-16 10:52:59.802361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:00.048 [2024-12-16 10:52:59.802368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:00.048 [2024-12-16 10:52:59.802376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:00.048 [2024-12-16 10:52:59.802391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:00.048 [2024-12-16 10:52:59.802398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:00.048 [2024-12-16 10:52:59.802406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:00.048 [2024-12-16 10:52:59.802413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:00.049 [2024-12-16 10:52:59.802420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:00.049 [2024-12-16 10:52:59.802428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:00.049 [2024-12-16 10:52:59.802435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 
wr_cnt: 0 state: free 00:21:00.049 [2024-12-16 10:52:59.802444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:00.049 [2024-12-16 10:52:59.802452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:00.049 [2024-12-16 10:52:59.802460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:00.049 [2024-12-16 10:52:59.802467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:00.049 [2024-12-16 10:52:59.802475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:00.049 [2024-12-16 10:52:59.802482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:00.049 [2024-12-16 10:52:59.802490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:00.049 [2024-12-16 10:52:59.802497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:00.049 [2024-12-16 10:52:59.802505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:00.049 [2024-12-16 10:52:59.802512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:00.049 [2024-12-16 10:52:59.802520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:00.049 [2024-12-16 10:52:59.802527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:00.049 [2024-12-16 10:52:59.802534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:00.049 [2024-12-16 10:52:59.802542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:00.049 [2024-12-16 10:52:59.802550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:00.049 [2024-12-16 10:52:59.802557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:00.049 [2024-12-16 10:52:59.802565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:00.049 [2024-12-16 10:52:59.802572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:00.049 [2024-12-16 10:52:59.802580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:00.049 [2024-12-16 10:52:59.802587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:00.049 [2024-12-16 10:52:59.802595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:00.049 [2024-12-16 10:52:59.802602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:00.049 [2024-12-16 10:52:59.802610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:00.049 [2024-12-16 10:52:59.802616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:00.049 [2024-12-16 10:52:59.802624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:00.049 [2024-12-16 10:52:59.802631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:00.049 [2024-12-16 10:52:59.802639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:00.049 [2024-12-16 10:52:59.802646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:00.049 [2024-12-16 10:52:59.802653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:00.049 [2024-12-16 10:52:59.802661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:00.049 [2024-12-16 10:52:59.802669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:00.049 [2024-12-16 10:52:59.802676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:00.049 [2024-12-16 10:52:59.802683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:00.049 [2024-12-16 10:52:59.802690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:00.049 [2024-12-16 10:52:59.802698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:00.049 [2024-12-16 10:52:59.802706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:00.049 [2024-12-16 10:52:59.802714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:00.049 [2024-12-16 10:52:59.802721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:00.049 [2024-12-16 10:52:59.802728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:00.049 [2024-12-16 10:52:59.802735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:00.049 [2024-12-16 10:52:59.802743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:00.049 [2024-12-16 10:52:59.802751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:00.049 [2024-12-16 10:52:59.802758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:00.049 [2024-12-16 10:52:59.802766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:00.049 [2024-12-16 10:52:59.802773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:00.049 [2024-12-16 10:52:59.802788] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:00.049 [2024-12-16 10:52:59.802796] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 642f8cb2-110e-4224-a571-58eb4e48d140 00:21:00.049 [2024-12-16 10:52:59.802804] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 108288 00:21:00.049 [2024-12-16 10:52:59.802811] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 109248 00:21:00.049 [2024-12-16 10:52:59.802818] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 108288 00:21:00.049 [2024-12-16 
10:52:59.802834] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0089 00:21:00.049 [2024-12-16 10:52:59.802842] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:00.049 [2024-12-16 10:52:59.802849] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:00.049 [2024-12-16 10:52:59.802856] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:00.049 [2024-12-16 10:52:59.802863] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:00.049 [2024-12-16 10:52:59.802869] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:00.049 [2024-12-16 10:52:59.802876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.049 [2024-12-16 10:52:59.802883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:00.049 [2024-12-16 10:52:59.802892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.866 ms 00:21:00.049 [2024-12-16 10:52:59.802902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.049 [2024-12-16 10:52:59.804393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.049 [2024-12-16 10:52:59.804414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:00.049 [2024-12-16 10:52:59.804425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.476 ms 00:21:00.049 [2024-12-16 10:52:59.804432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.049 [2024-12-16 10:52:59.804512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.049 [2024-12-16 10:52:59.804521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:00.049 [2024-12-16 10:52:59.804529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:21:00.049 [2024-12-16 10:52:59.804536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.049 [2024-12-16 10:52:59.809246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:00.049 [2024-12-16 10:52:59.809375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:00.049 [2024-12-16 10:52:59.809391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:00.049 [2024-12-16 10:52:59.809399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.049 [2024-12-16 10:52:59.809454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:00.049 [2024-12-16 10:52:59.809462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:00.049 [2024-12-16 10:52:59.809470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:00.049 [2024-12-16 10:52:59.809478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.049 [2024-12-16 10:52:59.809514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:00.049 [2024-12-16 10:52:59.809527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:00.049 [2024-12-16 10:52:59.809535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:00.049 [2024-12-16 10:52:59.809542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.049 [2024-12-16 10:52:59.809557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:00.049 [2024-12-16 10:52:59.809565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Initialize valid map 00:21:00.049 [2024-12-16 10:52:59.809572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:00.049 [2024-12-16 10:52:59.809580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.049 [2024-12-16 10:52:59.818983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:00.049 [2024-12-16 10:52:59.819022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:00.049 [2024-12-16 10:52:59.819032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:00.049 [2024-12-16 10:52:59.819039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.049 [2024-12-16 10:52:59.826991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:00.049 [2024-12-16 10:52:59.827031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:00.049 [2024-12-16 10:52:59.827041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:00.049 [2024-12-16 10:52:59.827049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.049 [2024-12-16 10:52:59.827092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:00.049 [2024-12-16 10:52:59.827101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:00.049 [2024-12-16 10:52:59.827118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:00.049 [2024-12-16 10:52:59.827126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.049 [2024-12-16 10:52:59.827151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:00.049 [2024-12-16 10:52:59.827159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:00.049 [2024-12-16 10:52:59.827167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:00.049 [2024-12-16 10:52:59.827174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.049 [2024-12-16 10:52:59.827235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:00.049 [2024-12-16 10:52:59.827244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:00.049 [2024-12-16 10:52:59.827253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:00.049 [2024-12-16 10:52:59.827263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.049 [2024-12-16 10:52:59.827290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:00.049 [2024-12-16 10:52:59.827300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:00.049 [2024-12-16 10:52:59.827311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:00.049 [2024-12-16 10:52:59.827318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.049 [2024-12-16 10:52:59.827358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:00.049 [2024-12-16 10:52:59.827366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:00.049 [2024-12-16 10:52:59.827375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:00.049 [2024-12-16 10:52:59.827384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.049 [2024-12-16 10:52:59.827428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:00.049 [2024-12-16 
10:52:59.827444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:00.049 [2024-12-16 10:52:59.827452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:00.049 [2024-12-16 10:52:59.827460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.049 [2024-12-16 10:52:59.827582] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 244.472 ms, result 0 00:21:00.621 00:21:00.621 00:21:00.883 10:53:00 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:21:00.883 [2024-12-16 10:53:00.677768] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:21:00.883 [2024-12-16 10:53:00.677916] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88039 ] 00:21:00.883 [2024-12-16 10:53:00.814074] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:00.883 [2024-12-16 10:53:00.867389] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:21:01.144 [2024-12-16 10:53:00.983655] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:01.144 [2024-12-16 10:53:00.983745] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:01.407 [2024-12-16 10:53:01.144539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.407 [2024-12-16 10:53:01.144596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:01.407 [2024-12-16 10:53:01.144615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:01.407 [2024-12-16 10:53:01.144626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.407 [2024-12-16 10:53:01.144679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.407 [2024-12-16 10:53:01.144690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:01.407 [2024-12-16 10:53:01.144699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:21:01.407 [2024-12-16 10:53:01.144725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.407 [2024-12-16 10:53:01.144747] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:01.407 [2024-12-16 10:53:01.145034] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:01.407 [2024-12-16 10:53:01.145051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.407 [2024-12-16 10:53:01.145060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:01.407 [2024-12-16 10:53:01.145073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.310 ms 00:21:01.407 [2024-12-16 10:53:01.145081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.407 [2024-12-16 10:53:01.146740] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:01.407 [2024-12-16 10:53:01.150463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Action 00:21:01.407 [2024-12-16 10:53:01.150512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:01.407 [2024-12-16 10:53:01.150524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.724 ms 00:21:01.407 [2024-12-16 10:53:01.150533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.407 [2024-12-16 10:53:01.150607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.407 [2024-12-16 10:53:01.150620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:01.407 [2024-12-16 10:53:01.150628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:21:01.407 [2024-12-16 10:53:01.150636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.407 [2024-12-16 10:53:01.158645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.407 [2024-12-16 10:53:01.158690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:01.407 [2024-12-16 10:53:01.158702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.968 ms 00:21:01.407 [2024-12-16 10:53:01.158715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.407 [2024-12-16 10:53:01.158812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.407 [2024-12-16 10:53:01.158822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:01.407 [2024-12-16 10:53:01.158834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:21:01.407 [2024-12-16 10:53:01.158842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.407 [2024-12-16 10:53:01.158899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.407 [2024-12-16 10:53:01.158914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:01.407 [2024-12-16 10:53:01.158951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:21:01.407 [2024-12-16 10:53:01.158960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.407 [2024-12-16 10:53:01.158984] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:01.407 [2024-12-16 10:53:01.160995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.407 [2024-12-16 10:53:01.161028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:01.407 [2024-12-16 10:53:01.161039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.017 ms 00:21:01.407 [2024-12-16 10:53:01.161046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.407 [2024-12-16 10:53:01.161088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.407 [2024-12-16 10:53:01.161097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:01.407 [2024-12-16 10:53:01.161106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:21:01.407 [2024-12-16 10:53:01.161113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.407 [2024-12-16 10:53:01.161139] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:01.407 [2024-12-16 10:53:01.161168] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:21:01.407 [2024-12-16 10:53:01.161216] upgrade/ftl_sb_v5.c: 
287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:01.407 [2024-12-16 10:53:01.161236] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:21:01.407 [2024-12-16 10:53:01.161344] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:01.407 [2024-12-16 10:53:01.161355] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:01.407 [2024-12-16 10:53:01.161367] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:01.407 [2024-12-16 10:53:01.161377] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:01.407 [2024-12-16 10:53:01.161389] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:01.407 [2024-12-16 10:53:01.161397] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:01.407 [2024-12-16 10:53:01.161405] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:01.407 [2024-12-16 10:53:01.161417] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:01.407 [2024-12-16 10:53:01.161425] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:01.407 [2024-12-16 10:53:01.161436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.407 [2024-12-16 10:53:01.161444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:01.407 [2024-12-16 10:53:01.161452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.299 ms 00:21:01.407 [2024-12-16 10:53:01.161460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.407 [2024-12-16 10:53:01.161542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.407 [2024-12-16 10:53:01.161551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:01.407 [2024-12-16 10:53:01.161567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:21:01.407 [2024-12-16 10:53:01.161575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.407 [2024-12-16 10:53:01.161679] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:01.407 [2024-12-16 10:53:01.161690] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:01.407 [2024-12-16 10:53:01.161700] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:01.407 [2024-12-16 10:53:01.161710] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:01.407 [2024-12-16 10:53:01.161718] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:01.407 [2024-12-16 10:53:01.161725] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:01.407 [2024-12-16 10:53:01.161733] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:01.407 [2024-12-16 10:53:01.161742] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:01.407 [2024-12-16 10:53:01.161751] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:01.407 [2024-12-16 10:53:01.161759] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:01.407 [2024-12-16 10:53:01.161766] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:01.407 [2024-12-16 10:53:01.161774] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:01.408 [2024-12-16 10:53:01.161781] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:01.408 [2024-12-16 10:53:01.161790] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:01.408 [2024-12-16 10:53:01.161800] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:21:01.408 [2024-12-16 10:53:01.161808] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:01.408 [2024-12-16 10:53:01.161816] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:01.408 [2024-12-16 10:53:01.161826] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:21:01.408 [2024-12-16 10:53:01.161835] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:01.408 [2024-12-16 10:53:01.161843] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:01.408 [2024-12-16 10:53:01.161850] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:01.408 [2024-12-16 10:53:01.161858] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:01.408 [2024-12-16 10:53:01.161866] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:01.408 [2024-12-16 10:53:01.161873] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:01.408 [2024-12-16 10:53:01.161882] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:01.408 [2024-12-16 10:53:01.161889] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:01.408 [2024-12-16 10:53:01.161897] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:01.408 [2024-12-16 10:53:01.161904] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:01.408 [2024-12-16 10:53:01.161912] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:01.408 [2024-12-16 10:53:01.161919] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:21:01.408 [2024-12-16 10:53:01.161945] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:01.408 [2024-12-16 10:53:01.161954] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:01.408 [2024-12-16 10:53:01.161962] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:21:01.408 [2024-12-16 10:53:01.161969] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:01.408 [2024-12-16 10:53:01.161977] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:01.408 [2024-12-16 10:53:01.161984] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:21:01.408 [2024-12-16 10:53:01.161992] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:01.408 [2024-12-16 10:53:01.162000] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:01.408 [2024-12-16 10:53:01.162007] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:21:01.408 [2024-12-16 10:53:01.162015] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:01.408 [2024-12-16 10:53:01.162023] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:01.408 [2024-12-16 10:53:01.162030] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:21:01.408 
[2024-12-16 10:53:01.162039] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:01.408 [2024-12-16 10:53:01.162047] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:01.408 [2024-12-16 10:53:01.162055] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:01.408 [2024-12-16 10:53:01.162064] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:01.408 [2024-12-16 10:53:01.162075] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:01.408 [2024-12-16 10:53:01.162083] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:01.408 [2024-12-16 10:53:01.162090] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:01.408 [2024-12-16 10:53:01.162098] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:01.408 [2024-12-16 10:53:01.162105] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:01.408 [2024-12-16 10:53:01.162112] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:01.408 [2024-12-16 10:53:01.162119] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:01.408 [2024-12-16 10:53:01.162128] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:01.408 [2024-12-16 10:53:01.162136] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:01.408 [2024-12-16 10:53:01.162145] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:01.408 [2024-12-16 10:53:01.162152] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:21:01.408 [2024-12-16 10:53:01.162160] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:21:01.408 [2024-12-16 10:53:01.162167] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:21:01.408 [2024-12-16 10:53:01.162174] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:21:01.408 [2024-12-16 10:53:01.162181] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:21:01.408 [2024-12-16 10:53:01.162188] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:21:01.408 [2024-12-16 10:53:01.162198] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:21:01.408 [2024-12-16 10:53:01.162205] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:21:01.408 [2024-12-16 10:53:01.162218] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:21:01.408 [2024-12-16 10:53:01.162225] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:21:01.408 [2024-12-16 10:53:01.162233] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:21:01.408 [2024-12-16 10:53:01.162240] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:21:01.408 [2024-12-16 10:53:01.162247] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:21:01.408 [2024-12-16 10:53:01.162254] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:01.408 [2024-12-16 10:53:01.162262] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:01.408 [2024-12-16 10:53:01.162275] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:01.408 [2024-12-16 10:53:01.162282] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:01.408 [2024-12-16 10:53:01.162289] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:01.408 [2024-12-16 10:53:01.162297] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:01.408 [2024-12-16 10:53:01.162305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.408 [2024-12-16 10:53:01.162313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:01.408 [2024-12-16 10:53:01.162323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.694 ms 00:21:01.408 [2024-12-16 10:53:01.162333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.408 [2024-12-16 10:53:01.185690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.408 [2024-12-16 10:53:01.185754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:01.408 [2024-12-16 10:53:01.185767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.312 ms 00:21:01.408 [2024-12-16 10:53:01.185776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.408 [2024-12-16 10:53:01.185880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.408 [2024-12-16 10:53:01.185893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:01.408 [2024-12-16 10:53:01.185903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:21:01.408 [2024-12-16 10:53:01.185914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.408 [2024-12-16 10:53:01.197891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.408 [2024-12-16 10:53:01.197962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:01.408 [2024-12-16 10:53:01.197974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.887 ms 00:21:01.408 [2024-12-16 10:53:01.197983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.408 [2024-12-16 10:53:01.198021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.408 [2024-12-16 10:53:01.198031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:01.408 
[2024-12-16 10:53:01.198041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:01.408 [2024-12-16 10:53:01.198049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.408 [2024-12-16 10:53:01.198612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.408 [2024-12-16 10:53:01.198661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:01.408 [2024-12-16 10:53:01.198682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.509 ms 00:21:01.408 [2024-12-16 10:53:01.198692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.408 [2024-12-16 10:53:01.198848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.408 [2024-12-16 10:53:01.198859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:01.408 [2024-12-16 10:53:01.198869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.133 ms 00:21:01.408 [2024-12-16 10:53:01.198879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.408 [2024-12-16 10:53:01.205770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.408 [2024-12-16 10:53:01.205821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:01.408 [2024-12-16 10:53:01.205834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.865 ms 00:21:01.408 [2024-12-16 10:53:01.205842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.408 [2024-12-16 10:53:01.209626] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:21:01.408 [2024-12-16 10:53:01.209674] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:01.408 [2024-12-16 10:53:01.209686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.408 [2024-12-16 10:53:01.209700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:01.408 [2024-12-16 10:53:01.209709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.732 ms 00:21:01.408 [2024-12-16 10:53:01.209717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.408 [2024-12-16 10:53:01.225549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.408 [2024-12-16 10:53:01.225596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:01.408 [2024-12-16 10:53:01.225614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.781 ms 00:21:01.409 [2024-12-16 10:53:01.225622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.409 [2024-12-16 10:53:01.228481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.409 [2024-12-16 10:53:01.228669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:01.409 [2024-12-16 10:53:01.228688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.806 ms 00:21:01.409 [2024-12-16 10:53:01.228696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.409 [2024-12-16 10:53:01.231222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.409 [2024-12-16 10:53:01.231264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:01.409 [2024-12-16 10:53:01.231274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 2.479 ms 00:21:01.409 [2024-12-16 10:53:01.231281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.409 [2024-12-16 10:53:01.231626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.409 [2024-12-16 10:53:01.231637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:01.409 [2024-12-16 10:53:01.231653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.267 ms 00:21:01.409 [2024-12-16 10:53:01.231661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.409 [2024-12-16 10:53:01.250669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.409 [2024-12-16 10:53:01.250833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:01.409 [2024-12-16 10:53:01.250891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.992 ms 00:21:01.409 [2024-12-16 10:53:01.250915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.409 [2024-12-16 10:53:01.258404] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:21:01.409 [2024-12-16 10:53:01.260946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.409 [2024-12-16 10:53:01.261042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:01.409 [2024-12-16 10:53:01.261092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.948 ms 00:21:01.409 [2024-12-16 10:53:01.261114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.409 [2024-12-16 10:53:01.261188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.409 [2024-12-16 10:53:01.261214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:01.409 [2024-12-16 10:53:01.261236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:21:01.409 [2024-12-16 10:53:01.261255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.409 [2024-12-16 10:53:01.262638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.409 [2024-12-16 10:53:01.262736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:01.409 [2024-12-16 10:53:01.262781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.336 ms 00:21:01.409 [2024-12-16 10:53:01.262807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.409 [2024-12-16 10:53:01.262849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.409 [2024-12-16 10:53:01.262875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:01.409 [2024-12-16 10:53:01.262895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:21:01.409 [2024-12-16 10:53:01.262913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.409 [2024-12-16 10:53:01.262975] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:01.409 [2024-12-16 10:53:01.263000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.409 [2024-12-16 10:53:01.263064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:01.409 [2024-12-16 10:53:01.263077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:21:01.409 [2024-12-16 10:53:01.263084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:21:01.409 [2024-12-16 10:53:01.266996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.409 [2024-12-16 10:53:01.267027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:01.409 [2024-12-16 10:53:01.267037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.887 ms 00:21:01.409 [2024-12-16 10:53:01.267045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.409 [2024-12-16 10:53:01.267118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.409 [2024-12-16 10:53:01.267127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:01.409 [2024-12-16 10:53:01.267135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:21:01.409 [2024-12-16 10:53:01.267143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.409 [2024-12-16 10:53:01.268103] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 123.171 ms, result 0 00:21:02.794  [2024-12-16T10:53:03.727Z] Copying: 16/1024 [MB] (16 MBps) ... [2024-12-16T10:54:08.415Z] Copying: 1024/1024 [MB] (average 15 MBps)[2024-12-16 10:54:08.179686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.426 [2024-12-16 10:54:08.179823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:08.426 [2024-12-16 10:54:08.179859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:22:08.426 [2024-12-16 10:54:08.179881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.426 [2024-12-16 10:54:08.179972] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:08.426 [2024-12-16 10:54:08.181108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.426 [2024-12-16 10:54:08.181189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:08.426 [2024-12-16 10:54:08.181219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.092 ms 00:22:08.426 [2024-12-16 10:54:08.181242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.426 [2024-12-16 10:54:08.181863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.426 [2024-12-16 10:54:08.181912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:08.426 [2024-12-16 10:54:08.181978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.547 ms 00:22:08.426 [2024-12-16 10:54:08.182000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.426 [2024-12-16 10:54:08.189916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.426 [2024-12-16 10:54:08.189999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:08.426 [2024-12-16 10:54:08.190013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.878 ms 00:22:08.426 [2024-12-16 10:54:08.190033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.426 [2024-12-16
10:54:08.196671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.426 [2024-12-16 10:54:08.196769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:08.426 [2024-12-16 10:54:08.196787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.586 ms 00:22:08.426 [2024-12-16 10:54:08.196799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.426 [2024-12-16 10:54:08.200033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.426 [2024-12-16 10:54:08.200088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:08.426 [2024-12-16 10:54:08.200099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.167 ms 00:22:08.426 [2024-12-16 10:54:08.200107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.426 [2024-12-16 10:54:08.204634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.426 [2024-12-16 10:54:08.204708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:08.426 [2024-12-16 10:54:08.204726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.476 ms 00:22:08.426 [2024-12-16 10:54:08.204739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.689 [2024-12-16 10:54:08.476259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.689 [2024-12-16 10:54:08.476513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:08.689 [2024-12-16 10:54:08.476553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 271.450 ms 00:22:08.689 [2024-12-16 10:54:08.476564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.689 [2024-12-16 10:54:08.479550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.689 [2024-12-16 10:54:08.479609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:22:08.689 [2024-12-16 10:54:08.479621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.956 ms 00:22:08.689 [2024-12-16 10:54:08.479629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.689 [2024-12-16 10:54:08.481819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.689 [2024-12-16 10:54:08.482037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:22:08.689 [2024-12-16 10:54:08.482112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.141 ms 00:22:08.689 [2024-12-16 10:54:08.482137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.689 [2024-12-16 10:54:08.484030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.689 [2024-12-16 10:54:08.484206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:08.689 [2024-12-16 10:54:08.484226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.842 ms 00:22:08.689 [2024-12-16 10:54:08.484234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.689 [2024-12-16 10:54:08.486132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.689 [2024-12-16 10:54:08.486185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:08.689 [2024-12-16 10:54:08.486196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.828 ms 00:22:08.689 [2024-12-16 10:54:08.486205] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.689 [2024-12-16 10:54:08.486249] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:08.689 [2024-12-16 10:54:08.486276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:22:08.689 [2024-12-16 10:54:08.486288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:08.689 [2024-12-16 10:54:08.486296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:08.689 [2024-12-16 10:54:08.486305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:08.689 [2024-12-16 10:54:08.486313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:08.689 [2024-12-16 10:54:08.486321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:08.689 [2024-12-16 10:54:08.486329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:08.689 [2024-12-16 10:54:08.486337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:08.689 [2024-12-16 10:54:08.486345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:08.689 [2024-12-16 10:54:08.486353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:08.689 [2024-12-16 10:54:08.486362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:08.689 [2024-12-16 10:54:08.486369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:08.689 [2024-12-16 10:54:08.486377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:08.689 [2024-12-16 10:54:08.486385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:08.689 [2024-12-16 10:54:08.486393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:08.689 [2024-12-16 10:54:08.486401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:08.689 [2024-12-16 10:54:08.486408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:08.689 [2024-12-16 10:54:08.486417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:08.689 [2024-12-16 10:54:08.486424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:08.689 [2024-12-16 10:54:08.486431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:08.689 [2024-12-16 10:54:08.486439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:08.689 [2024-12-16 10:54:08.486446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:08.689 [2024-12-16 10:54:08.486453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:08.689 [2024-12-16 10:54:08.486461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:08.689 [2024-12-16 10:54:08.486468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:08.689 [2024-12-16 10:54:08.486476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:08.689 [2024-12-16 10:54:08.486485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:08.689 [2024-12-16 10:54:08.486492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:08.689 [2024-12-16 10:54:08.486499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:08.689 [2024-12-16 10:54:08.486508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:08.689 [2024-12-16 10:54:08.486517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:08.689 [2024-12-16 10:54:08.486525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:08.689 [2024-12-16 10:54:08.486532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:08.689 [2024-12-16 10:54:08.486540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:08.689 [2024-12-16 10:54:08.486547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:08.689 [2024-12-16 10:54:08.486555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:08.689 [2024-12-16 10:54:08.486562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:08.689 [2024-12-16 10:54:08.486569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:08.689 [2024-12-16 10:54:08.486577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:08.689 [2024-12-16 10:54:08.486584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:08.689 [2024-12-16 10:54:08.486592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:08.689 [2024-12-16 10:54:08.486599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:08.689 [2024-12-16 10:54:08.486606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:08.689 [2024-12-16 10:54:08.486615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:08.689 [2024-12-16 10:54:08.486623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:08.689 [2024-12-16 10:54:08.486631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:08.689 [2024-12-16 10:54:08.486640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:08.689 [2024-12-16 10:54:08.486649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:08.689 [2024-12-16 10:54:08.486656] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:08.689 [2024-12-16 10:54:08.486672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:08.689 [2024-12-16 10:54:08.486679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:08.689 [2024-12-16 10:54:08.486687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:08.689 [2024-12-16 10:54:08.486696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:08.689 [2024-12-16 10:54:08.486704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:08.689 [2024-12-16 10:54:08.486711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:08.689 [2024-12-16 10:54:08.486720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:08.689 [2024-12-16 10:54:08.486727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:08.690 [2024-12-16 10:54:08.486735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:08.690 [2024-12-16 10:54:08.486743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:08.690 [2024-12-16 10:54:08.486751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:08.690 [2024-12-16 10:54:08.486758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:08.690 [2024-12-16 10:54:08.486767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:08.690 [2024-12-16 10:54:08.486775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:08.690 [2024-12-16 10:54:08.486783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:08.690 [2024-12-16 10:54:08.486791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:08.690 [2024-12-16 10:54:08.486799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:08.690 [2024-12-16 10:54:08.486807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:08.690 [2024-12-16 10:54:08.486815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:08.690 [2024-12-16 10:54:08.486823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:08.690 [2024-12-16 10:54:08.486831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:08.690 [2024-12-16 10:54:08.486839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:08.690 [2024-12-16 10:54:08.486846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:08.690 [2024-12-16 10:54:08.486854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:08.690 [2024-12-16 10:54:08.486861] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:08.690 [2024-12-16 10:54:08.486869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:08.690 [2024-12-16 10:54:08.486876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:08.690 [2024-12-16 10:54:08.486884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:08.690 [2024-12-16 10:54:08.486892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:08.690 [2024-12-16 10:54:08.486900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:08.690 [2024-12-16 10:54:08.486907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:08.690 [2024-12-16 10:54:08.486914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:08.690 [2024-12-16 10:54:08.486922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:08.690 [2024-12-16 10:54:08.486955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:08.690 [2024-12-16 10:54:08.486964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:08.690 [2024-12-16 10:54:08.486973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:08.690 [2024-12-16 10:54:08.486981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:08.690 [2024-12-16 10:54:08.486990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:08.690 [2024-12-16 10:54:08.486998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:08.690 [2024-12-16 10:54:08.487006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:08.690 [2024-12-16 10:54:08.487014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:08.690 [2024-12-16 10:54:08.487022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:08.690 [2024-12-16 10:54:08.487030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:08.690 [2024-12-16 10:54:08.487037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:08.690 [2024-12-16 10:54:08.487049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:08.690 [2024-12-16 10:54:08.487058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:08.690 [2024-12-16 10:54:08.487067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:08.690 [2024-12-16 10:54:08.487076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:08.690 [2024-12-16 10:54:08.487084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:08.690 [2024-12-16 
10:54:08.487091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:08.690 [2024-12-16 10:54:08.487099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:08.690 [2024-12-16 10:54:08.487116] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:08.690 [2024-12-16 10:54:08.487132] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 642f8cb2-110e-4224-a571-58eb4e48d140 00:22:08.690 [2024-12-16 10:54:08.487141] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:22:08.690 [2024-12-16 10:54:08.487150] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 23744 00:22:08.690 [2024-12-16 10:54:08.487157] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 22784 00:22:08.690 [2024-12-16 10:54:08.487173] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0421 00:22:08.690 [2024-12-16 10:54:08.487184] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:08.690 [2024-12-16 10:54:08.487192] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:08.690 [2024-12-16 10:54:08.487201] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:08.690 [2024-12-16 10:54:08.487207] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:08.690 [2024-12-16 10:54:08.487215] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:08.690 [2024-12-16 10:54:08.487223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.690 [2024-12-16 10:54:08.487230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:08.690 [2024-12-16 10:54:08.487240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.975 ms 00:22:08.690 [2024-12-16 10:54:08.487248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.690 [2024-12-16 10:54:08.489770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.690 [2024-12-16 10:54:08.489806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:08.690 [2024-12-16 10:54:08.489820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.502 ms 00:22:08.690 [2024-12-16 10:54:08.489828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.690 [2024-12-16 10:54:08.489972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.690 [2024-12-16 10:54:08.489982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:08.690 [2024-12-16 10:54:08.489991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.120 ms 00:22:08.690 [2024-12-16 10:54:08.490003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.690 [2024-12-16 10:54:08.497349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:08.690 [2024-12-16 10:54:08.497509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:08.690 [2024-12-16 10:54:08.497565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:08.690 [2024-12-16 10:54:08.497590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.690 [2024-12-16 10:54:08.497676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:08.690 [2024-12-16 10:54:08.497699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Initialize bands metadata 00:22:08.690 [2024-12-16 10:54:08.497767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:08.690 [2024-12-16 10:54:08.497796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.690 [2024-12-16 10:54:08.497857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:08.690 [2024-12-16 10:54:08.497889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:08.690 [2024-12-16 10:54:08.497914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:08.690 [2024-12-16 10:54:08.498121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.690 [2024-12-16 10:54:08.498185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:08.690 [2024-12-16 10:54:08.498223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:08.690 [2024-12-16 10:54:08.498246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:08.690 [2024-12-16 10:54:08.498266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.690 [2024-12-16 10:54:08.511457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:08.690 [2024-12-16 10:54:08.511625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:08.690 [2024-12-16 10:54:08.511679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:08.690 [2024-12-16 10:54:08.511704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.690 [2024-12-16 10:54:08.522773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:08.690 [2024-12-16 10:54:08.522968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:08.690 [2024-12-16 10:54:08.523035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:08.690 [2024-12-16 10:54:08.523060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.690 [2024-12-16 10:54:08.523137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:08.690 [2024-12-16 10:54:08.523161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:08.691 [2024-12-16 10:54:08.523230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:08.691 [2024-12-16 10:54:08.523253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.691 [2024-12-16 10:54:08.523302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:08.691 [2024-12-16 10:54:08.523367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:08.691 [2024-12-16 10:54:08.523389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:08.691 [2024-12-16 10:54:08.523534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.691 [2024-12-16 10:54:08.523686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:08.691 [2024-12-16 10:54:08.523759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:08.691 [2024-12-16 10:54:08.524332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:08.691 [2024-12-16 10:54:08.524441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.691 [2024-12-16 10:54:08.524532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:08.691 [2024-12-16 
00:22:08.691 [2024-12-16 10:54:08.524747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:22:08.691 [2024-12-16 10:54:08.524853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:22:08.691 [2024-12-16 10:54:08.524870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:08.691 [2024-12-16 10:54:08.524952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:22:08.691 [2024-12-16 10:54:08.524966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:22:08.691 [2024-12-16 10:54:08.524975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:22:08.691 [2024-12-16 10:54:08.524991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:08.691 [2024-12-16 10:54:08.525040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:22:08.691 [2024-12-16 10:54:08.525051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:22:08.691 [2024-12-16 10:54:08.525060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:22:08.691 [2024-12-16 10:54:08.525068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:08.691 [2024-12-16 10:54:08.525204] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 345.514 ms, result 0
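A note on the statistics dumped above: "WAF" is the write amplification factor, i.e. total media writes divided by user writes. A quick check against the figures printed by ftl_dev_dump_stats (the variable names below are ours, not SPDK's):

    # WAF = total writes / user writes; reproduces the 1.0421 figure above.
    total_writes=23744
    user_writes=22784
    awk -v t="$total_writes" -v u="$user_writes" 'BEGIN { printf "WAF: %.4f\n", t / u }'

So roughly 4% of the writes the FTL issued during this run were its own bookkeeping (metadata, relocation) on top of the user payload.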
00:22:08.953
00:22:08.953
00:22:08.953 10:54:08 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
00:22:11.504 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK
00:22:11.504 10:54:10 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT
00:22:11.504 10:54:10 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill
00:22:11.504 10:54:10 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile
00:22:11.504 10:54:11 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
00:22:11.504 10:54:11 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:22:11.504 Process with pid 85754 is not found
00:22:11.504 Remove shared memory files
00:22:11.504 10:54:11 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 85754
00:22:11.504 10:54:11 ftl.ftl_restore -- common/autotest_common.sh@950 -- # '[' -z 85754 ']'
00:22:11.504 10:54:11 ftl.ftl_restore -- common/autotest_common.sh@954 -- # kill -0 85754
00:22:11.504 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (85754) - No such process
00:22:11.504 10:54:11 ftl.ftl_restore -- common/autotest_common.sh@977 -- # echo 'Process with pid 85754 is not found'
00:22:11.504 10:54:11 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm
00:22:11.504 10:54:11 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files
00:22:11.504 10:54:11 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f
00:22:11.504 10:54:11 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f
00:22:11.504 10:54:11 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f
00:22:11.504 10:54:11 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
00:22:11.504 10:54:11 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f
00:22:11.504 ************************************
00:22:11.504 END TEST ftl_restore
00:22:11.504 ************************************
00:22:11.504
00:22:11.504 real 4m49.228s
00:22:11.504 user 4m37.722s
00:22:11.504 sys 0m10.922s
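Two notes on the teardown just logged. First, the stray "Process with pid 85754 is not found" / "Remove shared memory files" lines show up before the commands that emit them, most likely because xtrace output and command output are buffered and interleave out of order. Second, killprocess is a simple guard-then-kill helper; a minimal sketch of the pattern (a rewrite for illustration, not the autotest_common.sh source):

    # kill -0 sends no signal; it only tests whether the pid still exists.
    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1
        if kill -0 "$pid" 2>/dev/null; then
            kill "$pid"
        else
            echo "Process with pid $pid is not found"
        fi
    }

Here the target behind ftl_restore had already exited, hence the "No such process" complaint from kill -0 and the fallback notice, after which the test still finishes cleanly.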
10:54:11 ftl.ftl_restore -- common/autotest_common.sh@1126 -- # xtrace_disable
00:22:11.504 10:54:11 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x
00:22:11.504 10:54:11 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0
10:54:11 ftl -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']'
10:54:11 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable
10:54:11 ftl -- common/autotest_common.sh@10 -- # set +x
************************************
START TEST ftl_dirty_shutdown
************************************
10:54:11 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0
* Looking for test storage...
* Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl
10:54:11 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1680 -- # [[ y == y ]]
10:54:11 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1681 -- # lcov --version
10:54:11 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1681 -- # awk '{print $NF}'
10:54:11 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1681 -- # lt 1.15 2
10:54:11 ftl.ftl_dirty_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
10:54:11 ftl.ftl_dirty_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l
10:54:11 ftl.ftl_dirty_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l
10:54:11 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # IFS=.-:
10:54:11 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # read -ra ver1
10:54:11 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # IFS=.-:
10:54:11 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # read -ra ver2
10:54:11 ftl.ftl_dirty_shutdown -- scripts/common.sh@338 -- # local 'op=<'
10:54:11 ftl.ftl_dirty_shutdown -- scripts/common.sh@340 -- # ver1_l=2
10:54:11 ftl.ftl_dirty_shutdown -- scripts/common.sh@341 -- # ver2_l=1
10:54:11 ftl.ftl_dirty_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
10:54:11 ftl.ftl_dirty_shutdown -- scripts/common.sh@344 -- # case "$op" in
10:54:11 ftl.ftl_dirty_shutdown -- scripts/common.sh@345 -- # : 1
10:54:11 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v = 0 ))
10:54:11 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
10:54:11 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # decimal 1
10:54:11 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=1
10:54:11 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
10:54:11 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 1
10:54:11 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # ver1[v]=1
10:54:11 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # decimal 2
10:54:11 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=2
10:54:11 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
10:54:11 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 2
10:54:11 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # ver2[v]=2
10:54:11 ftl.ftl_dirty_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
10:54:11 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
10:54:11 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # return 0
10:54:11 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
10:54:11 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS=
00:22:11.505 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:22:11.505 --rc genhtml_branch_coverage=1
00:22:11.505 --rc genhtml_function_coverage=1
00:22:11.505 --rc genhtml_legend=1
00:22:11.505 --rc geninfo_all_blocks=1
00:22:11.505 --rc geninfo_unexecuted_blocks=1
00:22:11.505
00:22:11.505 '
10:54:11 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1694 -- # LCOV_OPTS='
00:22:11.505 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:22:11.505 --rc genhtml_branch_coverage=1
00:22:11.505 --rc genhtml_function_coverage=1
00:22:11.505 --rc genhtml_legend=1
00:22:11.505 --rc geninfo_all_blocks=1
00:22:11.505 --rc geninfo_unexecuted_blocks=1
00:22:11.505
00:22:11.505 '
10:54:11 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov
00:22:11.505 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:22:11.505 --rc genhtml_branch_coverage=1
00:22:11.505 --rc genhtml_function_coverage=1
00:22:11.505 --rc genhtml_legend=1
00:22:11.505 --rc geninfo_all_blocks=1
00:22:11.505 --rc geninfo_unexecuted_blocks=1
00:22:11.505
00:22:11.505 '
10:54:11 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1695 -- # LCOV='lcov
00:22:11.505 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:22:11.505 --rc genhtml_branch_coverage=1
00:22:11.505 --rc genhtml_function_coverage=1
00:22:11.505 --rc genhtml_legend=1
00:22:11.505 --rc geninfo_all_blocks=1
00:22:11.505 --rc geninfo_unexecuted_blocks=1
00:22:11.505
00:22:11.505 '
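The lt/cmp_versions trace above is the harness deciding whether the installed lcov (1.15) is older than 2, which selects the legacy "--rc lcov_*" option spelling it then exports. The algorithm: split both version strings on ".", "-" and ":", then compare numerically field by field. A compact re-implementation of the idea (ours, not the scripts/common.sh source):

    # Succeeds when $1 is a strictly lower version than $2.
    lt() {
        local IFS=.-: i v1 v2
        read -ra v1 <<< "$1"
        read -ra v2 <<< "$2"
        for (( i = 0; i < (${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]}); i++ )); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0   # first lower field wins
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1   # versions are equal
    }
    lt 1.15 2 && echo "older"   # takes the same branch as the trace: 1 < 2, return 0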
10:54:11 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh
10:54:11 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh
10:54:11 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl
10:54:11 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl
10:54:11 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../..
10:54:11 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk
10:54:11 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
10:54:11 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]'
10:54:11 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]'
10:54:11 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
10:54:11 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
10:54:11 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]'
10:54:11 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]'
10:54:11 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
10:54:11 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
10:54:11 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid=
10:54:11 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid=
10:54:11 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
10:54:11 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
10:54:11 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]'
10:54:11 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]'
10:54:11 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock
10:54:11 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock
10:54:11 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
10:54:11 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
10:54:11 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid=
10:54:11 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid=
10:54:11 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
10:54:11 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
10:54:11 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
10:54:11 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
10:54:11 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt
10:54:11 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in
10:54:11 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0
10:54:11 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt
10:54:11 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2
10:54:11
ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:22:11.505 10:54:11 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:22:11.505 10:54:11 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:22:11.505 10:54:11 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:22:11.505 10:54:11 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:22:11.505 10:54:11 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:22:11.505 10:54:11 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=88834 00:22:11.505 10:54:11 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:22:11.505 10:54:11 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 88834 00:22:11.505 10:54:11 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@831 -- # '[' -z 88834 ']' 00:22:11.505 10:54:11 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:11.505 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:11.505 10:54:11 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:11.505 10:54:11 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:11.506 10:54:11 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:11.506 10:54:11 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:11.506 [2024-12-16 10:54:11.449404] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
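The launch sequence just traced (spdk_tgt -m 0x1 in the background as svcpid=88834, then waitforlisten with max_retries=100 against /var/tmp/spdk.sock) is the standard SPDK test bring-up: start the target, then block until its RPC socket answers before issuing bdev commands; the DPDK EAL parameter line that follows completes the startup banner. A condensed sketch of the pattern (simplified; the real waitforlisten in autotest_common.sh is more careful, and the retry delay here is an assumption):

    # Start the target on core 0 and poll its default RPC socket until it
    # accepts a trivial RPC; $rootdir is the SPDK repo, as set earlier.
    "$rootdir/build/bin/spdk_tgt" -m 0x1 &
    svcpid=$!
    echo "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..."
    for (( i = 0; i < 100; i++ )); do
        "$rootdir/scripts/rpc.py" rpc_get_methods >/dev/null 2>&1 && break
        sleep 0.1
    done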
00:22:11.506 [2024-12-16 10:54:11.449790] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88834 ] 00:22:11.767 [2024-12-16 10:54:11.586875] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:11.767 [2024-12-16 10:54:11.639127] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:22:12.339 10:54:12 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:12.339 10:54:12 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@864 -- # return 0 00:22:12.339 10:54:12 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:22:12.339 10:54:12 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:22:12.339 10:54:12 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:22:12.340 10:54:12 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:22:12.340 10:54:12 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:22:12.340 10:54:12 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:22:12.913 10:54:12 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:22:12.913 10:54:12 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:22:12.913 10:54:12 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:22:12.913 10:54:12 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:22:12.913 10:54:12 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:22:12.913 10:54:12 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:22:12.913 10:54:12 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:22:12.913 10:54:12 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:22:12.913 10:54:12 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:22:12.913 { 00:22:12.913 "name": "nvme0n1", 00:22:12.913 "aliases": [ 00:22:12.913 "defc4a3e-2bc4-449b-bc2b-7f64b62d061b" 00:22:12.913 ], 00:22:12.913 "product_name": "NVMe disk", 00:22:12.913 "block_size": 4096, 00:22:12.913 "num_blocks": 1310720, 00:22:12.913 "uuid": "defc4a3e-2bc4-449b-bc2b-7f64b62d061b", 00:22:12.913 "numa_id": -1, 00:22:12.913 "assigned_rate_limits": { 00:22:12.913 "rw_ios_per_sec": 0, 00:22:12.913 "rw_mbytes_per_sec": 0, 00:22:12.913 "r_mbytes_per_sec": 0, 00:22:12.913 "w_mbytes_per_sec": 0 00:22:12.913 }, 00:22:12.913 "claimed": true, 00:22:12.913 "claim_type": "read_many_write_one", 00:22:12.913 "zoned": false, 00:22:12.913 "supported_io_types": { 00:22:12.913 "read": true, 00:22:12.913 "write": true, 00:22:12.913 "unmap": true, 00:22:12.913 "flush": true, 00:22:12.913 "reset": true, 00:22:12.913 "nvme_admin": true, 00:22:12.913 "nvme_io": true, 00:22:12.913 "nvme_io_md": false, 00:22:12.913 "write_zeroes": true, 00:22:12.913 "zcopy": false, 00:22:12.913 "get_zone_info": false, 00:22:12.913 "zone_management": false, 00:22:12.913 "zone_append": false, 00:22:12.913 "compare": true, 00:22:12.913 "compare_and_write": false, 00:22:12.913 "abort": true, 00:22:12.913 "seek_hole": false, 00:22:12.913 "seek_data": false, 00:22:12.913 
"copy": true, 00:22:12.913 "nvme_iov_md": false 00:22:12.913 }, 00:22:12.913 "driver_specific": { 00:22:12.913 "nvme": [ 00:22:12.913 { 00:22:12.913 "pci_address": "0000:00:11.0", 00:22:12.913 "trid": { 00:22:12.913 "trtype": "PCIe", 00:22:12.913 "traddr": "0000:00:11.0" 00:22:12.913 }, 00:22:12.913 "ctrlr_data": { 00:22:12.913 "cntlid": 0, 00:22:12.913 "vendor_id": "0x1b36", 00:22:12.913 "model_number": "QEMU NVMe Ctrl", 00:22:12.913 "serial_number": "12341", 00:22:12.913 "firmware_revision": "8.0.0", 00:22:12.913 "subnqn": "nqn.2019-08.org.qemu:12341", 00:22:12.913 "oacs": { 00:22:12.913 "security": 0, 00:22:12.913 "format": 1, 00:22:12.913 "firmware": 0, 00:22:12.913 "ns_manage": 1 00:22:12.913 }, 00:22:12.913 "multi_ctrlr": false, 00:22:12.913 "ana_reporting": false 00:22:12.913 }, 00:22:12.913 "vs": { 00:22:12.913 "nvme_version": "1.4" 00:22:12.913 }, 00:22:12.913 "ns_data": { 00:22:12.913 "id": 1, 00:22:12.913 "can_share": false 00:22:12.913 } 00:22:12.913 } 00:22:12.913 ], 00:22:12.913 "mp_policy": "active_passive" 00:22:12.913 } 00:22:12.913 } 00:22:12.913 ]' 00:22:12.913 10:54:12 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:22:12.913 10:54:12 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:22:12.913 10:54:12 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:22:13.219 10:54:12 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=1310720 00:22:13.219 10:54:12 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:22:13.219 10:54:12 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 5120 00:22:13.219 10:54:12 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:22:13.219 10:54:12 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:22:13.219 10:54:12 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:22:13.219 10:54:12 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:22:13.219 10:54:12 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:22:13.219 10:54:13 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=29deaba0-0089-4f69-85b6-c4b4929aa1dc 00:22:13.219 10:54:13 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:22:13.219 10:54:13 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 29deaba0-0089-4f69-85b6-c4b4929aa1dc 00:22:13.504 10:54:13 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:22:13.766 10:54:13 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=16835df1-8efd-47bd-b243-c9b0c173475e 00:22:13.766 10:54:13 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 16835df1-8efd-47bd-b243-c9b0c173475e 00:22:14.028 10:54:13 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=a4bc17ba-6316-418a-be38-84deca3dab0f 00:22:14.028 10:54:13 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:22:14.028 10:54:13 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 a4bc17ba-6316-418a-be38-84deca3dab0f 00:22:14.028 10:54:13 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:22:14.028 10:54:13 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local 
cache_bdf=0000:00:10.0 00:22:14.028 10:54:13 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=a4bc17ba-6316-418a-be38-84deca3dab0f 00:22:14.028 10:54:13 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:22:14.028 10:54:13 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size a4bc17ba-6316-418a-be38-84deca3dab0f 00:22:14.028 10:54:13 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=a4bc17ba-6316-418a-be38-84deca3dab0f 00:22:14.028 10:54:13 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:22:14.028 10:54:13 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:22:14.028 10:54:13 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:22:14.028 10:54:13 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b a4bc17ba-6316-418a-be38-84deca3dab0f 00:22:14.287 10:54:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:22:14.287 { 00:22:14.287 "name": "a4bc17ba-6316-418a-be38-84deca3dab0f", 00:22:14.287 "aliases": [ 00:22:14.287 "lvs/nvme0n1p0" 00:22:14.287 ], 00:22:14.287 "product_name": "Logical Volume", 00:22:14.287 "block_size": 4096, 00:22:14.287 "num_blocks": 26476544, 00:22:14.287 "uuid": "a4bc17ba-6316-418a-be38-84deca3dab0f", 00:22:14.287 "assigned_rate_limits": { 00:22:14.287 "rw_ios_per_sec": 0, 00:22:14.287 "rw_mbytes_per_sec": 0, 00:22:14.287 "r_mbytes_per_sec": 0, 00:22:14.287 "w_mbytes_per_sec": 0 00:22:14.287 }, 00:22:14.287 "claimed": false, 00:22:14.287 "zoned": false, 00:22:14.288 "supported_io_types": { 00:22:14.288 "read": true, 00:22:14.288 "write": true, 00:22:14.288 "unmap": true, 00:22:14.288 "flush": false, 00:22:14.288 "reset": true, 00:22:14.288 "nvme_admin": false, 00:22:14.288 "nvme_io": false, 00:22:14.288 "nvme_io_md": false, 00:22:14.288 "write_zeroes": true, 00:22:14.288 "zcopy": false, 00:22:14.288 "get_zone_info": false, 00:22:14.288 "zone_management": false, 00:22:14.288 "zone_append": false, 00:22:14.288 "compare": false, 00:22:14.288 "compare_and_write": false, 00:22:14.288 "abort": false, 00:22:14.288 "seek_hole": true, 00:22:14.288 "seek_data": true, 00:22:14.288 "copy": false, 00:22:14.288 "nvme_iov_md": false 00:22:14.288 }, 00:22:14.288 "driver_specific": { 00:22:14.288 "lvol": { 00:22:14.288 "lvol_store_uuid": "16835df1-8efd-47bd-b243-c9b0c173475e", 00:22:14.288 "base_bdev": "nvme0n1", 00:22:14.288 "thin_provision": true, 00:22:14.288 "num_allocated_clusters": 0, 00:22:14.288 "snapshot": false, 00:22:14.288 "clone": false, 00:22:14.288 "esnap_clone": false 00:22:14.288 } 00:22:14.288 } 00:22:14.288 } 00:22:14.288 ]' 00:22:14.288 10:54:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:22:14.288 10:54:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:22:14.288 10:54:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:22:14.288 10:54:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=26476544 00:22:14.288 10:54:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:22:14.288 10:54:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 103424 00:22:14.288 10:54:14 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:22:14.288 10:54:14 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:22:14.288 10:54:14 ftl.ftl_dirty_shutdown -- 
ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:22:14.546 10:54:14 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:22:14.546 10:54:14 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:22:14.546 10:54:14 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size a4bc17ba-6316-418a-be38-84deca3dab0f 00:22:14.546 10:54:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=a4bc17ba-6316-418a-be38-84deca3dab0f 00:22:14.546 10:54:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:22:14.546 10:54:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:22:14.546 10:54:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:22:14.546 10:54:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b a4bc17ba-6316-418a-be38-84deca3dab0f 00:22:14.805 10:54:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:22:14.805 { 00:22:14.805 "name": "a4bc17ba-6316-418a-be38-84deca3dab0f", 00:22:14.805 "aliases": [ 00:22:14.805 "lvs/nvme0n1p0" 00:22:14.805 ], 00:22:14.805 "product_name": "Logical Volume", 00:22:14.805 "block_size": 4096, 00:22:14.805 "num_blocks": 26476544, 00:22:14.805 "uuid": "a4bc17ba-6316-418a-be38-84deca3dab0f", 00:22:14.805 "assigned_rate_limits": { 00:22:14.805 "rw_ios_per_sec": 0, 00:22:14.805 "rw_mbytes_per_sec": 0, 00:22:14.805 "r_mbytes_per_sec": 0, 00:22:14.805 "w_mbytes_per_sec": 0 00:22:14.805 }, 00:22:14.805 "claimed": false, 00:22:14.805 "zoned": false, 00:22:14.805 "supported_io_types": { 00:22:14.805 "read": true, 00:22:14.805 "write": true, 00:22:14.805 "unmap": true, 00:22:14.805 "flush": false, 00:22:14.805 "reset": true, 00:22:14.805 "nvme_admin": false, 00:22:14.805 "nvme_io": false, 00:22:14.805 "nvme_io_md": false, 00:22:14.805 "write_zeroes": true, 00:22:14.805 "zcopy": false, 00:22:14.805 "get_zone_info": false, 00:22:14.805 "zone_management": false, 00:22:14.805 "zone_append": false, 00:22:14.805 "compare": false, 00:22:14.805 "compare_and_write": false, 00:22:14.805 "abort": false, 00:22:14.805 "seek_hole": true, 00:22:14.805 "seek_data": true, 00:22:14.805 "copy": false, 00:22:14.805 "nvme_iov_md": false 00:22:14.805 }, 00:22:14.805 "driver_specific": { 00:22:14.805 "lvol": { 00:22:14.805 "lvol_store_uuid": "16835df1-8efd-47bd-b243-c9b0c173475e", 00:22:14.805 "base_bdev": "nvme0n1", 00:22:14.805 "thin_provision": true, 00:22:14.805 "num_allocated_clusters": 0, 00:22:14.805 "snapshot": false, 00:22:14.805 "clone": false, 00:22:14.805 "esnap_clone": false 00:22:14.805 } 00:22:14.805 } 00:22:14.805 } 00:22:14.805 ]' 00:22:14.805 10:54:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:22:14.805 10:54:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:22:14.805 10:54:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:22:14.805 10:54:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=26476544 00:22:14.805 10:54:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:22:14.805 10:54:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 103424 00:22:14.805 10:54:14 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:22:14.805 10:54:14 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:22:15.064 10:54:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:22:15.064 10:54:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size a4bc17ba-6316-418a-be38-84deca3dab0f 00:22:15.064 10:54:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=a4bc17ba-6316-418a-be38-84deca3dab0f 00:22:15.064 10:54:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:22:15.064 10:54:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:22:15.064 10:54:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:22:15.064 10:54:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b a4bc17ba-6316-418a-be38-84deca3dab0f 00:22:15.064 10:54:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:22:15.064 { 00:22:15.064 "name": "a4bc17ba-6316-418a-be38-84deca3dab0f", 00:22:15.064 "aliases": [ 00:22:15.064 "lvs/nvme0n1p0" 00:22:15.064 ], 00:22:15.064 "product_name": "Logical Volume", 00:22:15.064 "block_size": 4096, 00:22:15.064 "num_blocks": 26476544, 00:22:15.064 "uuid": "a4bc17ba-6316-418a-be38-84deca3dab0f", 00:22:15.064 "assigned_rate_limits": { 00:22:15.064 "rw_ios_per_sec": 0, 00:22:15.064 "rw_mbytes_per_sec": 0, 00:22:15.064 "r_mbytes_per_sec": 0, 00:22:15.064 "w_mbytes_per_sec": 0 00:22:15.064 }, 00:22:15.064 "claimed": false, 00:22:15.064 "zoned": false, 00:22:15.064 "supported_io_types": { 00:22:15.064 "read": true, 00:22:15.064 "write": true, 00:22:15.064 "unmap": true, 00:22:15.064 "flush": false, 00:22:15.064 "reset": true, 00:22:15.064 "nvme_admin": false, 00:22:15.064 "nvme_io": false, 00:22:15.064 "nvme_io_md": false, 00:22:15.064 "write_zeroes": true, 00:22:15.064 "zcopy": false, 00:22:15.064 "get_zone_info": false, 00:22:15.064 "zone_management": false, 00:22:15.064 "zone_append": false, 00:22:15.064 "compare": false, 00:22:15.064 "compare_and_write": false, 00:22:15.064 "abort": false, 00:22:15.064 "seek_hole": true, 00:22:15.064 "seek_data": true, 00:22:15.064 "copy": false, 00:22:15.064 "nvme_iov_md": false 00:22:15.064 }, 00:22:15.064 "driver_specific": { 00:22:15.064 "lvol": { 00:22:15.064 "lvol_store_uuid": "16835df1-8efd-47bd-b243-c9b0c173475e", 00:22:15.064 "base_bdev": "nvme0n1", 00:22:15.064 "thin_provision": true, 00:22:15.064 "num_allocated_clusters": 0, 00:22:15.064 "snapshot": false, 00:22:15.064 "clone": false, 00:22:15.064 "esnap_clone": false 00:22:15.064 } 00:22:15.064 } 00:22:15.064 } 00:22:15.064 ]' 00:22:15.064 10:54:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:22:15.323 10:54:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:22:15.323 10:54:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:22:15.323 10:54:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=26476544 00:22:15.323 10:54:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:22:15.323 10:54:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 103424 00:22:15.323 10:54:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:22:15.323 10:54:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d a4bc17ba-6316-418a-be38-84deca3dab0f 
--l2p_dram_limit 10' 00:22:15.323 10:54:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:22:15.323 10:54:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:22:15.323 10:54:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:22:15.323 10:54:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d a4bc17ba-6316-418a-be38-84deca3dab0f --l2p_dram_limit 10 -c nvc0n1p0 00:22:15.323 [2024-12-16 10:54:15.289191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.323 [2024-12-16 10:54:15.289231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:15.323 [2024-12-16 10:54:15.289242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:22:15.323 [2024-12-16 10:54:15.289250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.323 [2024-12-16 10:54:15.289294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.323 [2024-12-16 10:54:15.289302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:15.323 [2024-12-16 10:54:15.289309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:22:15.323 [2024-12-16 10:54:15.289321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.323 [2024-12-16 10:54:15.289342] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:15.323 [2024-12-16 10:54:15.289549] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:15.323 [2024-12-16 10:54:15.289561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.323 [2024-12-16 10:54:15.289569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:15.323 [2024-12-16 10:54:15.289577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.226 ms 00:22:15.323 [2024-12-16 10:54:15.289585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.323 [2024-12-16 10:54:15.289636] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 12c7a348-302f-4b33-b558-f5d26dfceb2c 00:22:15.323 [2024-12-16 10:54:15.290568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.323 [2024-12-16 10:54:15.290589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:22:15.323 [2024-12-16 10:54:15.290598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:22:15.323 [2024-12-16 10:54:15.290607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.323 [2024-12-16 10:54:15.295269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.323 [2024-12-16 10:54:15.295293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:15.323 [2024-12-16 10:54:15.295302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.628 ms 00:22:15.323 [2024-12-16 10:54:15.295308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.323 [2024-12-16 10:54:15.295366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.323 [2024-12-16 10:54:15.295373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:15.323 [2024-12-16 10:54:15.295381] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:22:15.323 [2024-12-16 10:54:15.295391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.323 [2024-12-16 10:54:15.295428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.324 [2024-12-16 10:54:15.295436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:15.324 [2024-12-16 10:54:15.295443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:22:15.324 [2024-12-16 10:54:15.295449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.324 [2024-12-16 10:54:15.295469] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:15.324 [2024-12-16 10:54:15.296727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.324 [2024-12-16 10:54:15.296753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:15.324 [2024-12-16 10:54:15.296765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.266 ms 00:22:15.324 [2024-12-16 10:54:15.296773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.324 [2024-12-16 10:54:15.296797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.324 [2024-12-16 10:54:15.296805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:15.324 [2024-12-16 10:54:15.296814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:22:15.324 [2024-12-16 10:54:15.296822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.324 [2024-12-16 10:54:15.296840] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:22:15.324 [2024-12-16 10:54:15.296960] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:15.324 [2024-12-16 10:54:15.296970] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:15.324 [2024-12-16 10:54:15.296985] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:22:15.324 [2024-12-16 10:54:15.296994] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:15.324 [2024-12-16 10:54:15.297002] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:15.324 [2024-12-16 10:54:15.297007] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:22:15.324 [2024-12-16 10:54:15.297017] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:15.324 [2024-12-16 10:54:15.297022] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:15.324 [2024-12-16 10:54:15.297139] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:15.324 [2024-12-16 10:54:15.297148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.324 [2024-12-16 10:54:15.297155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:15.324 [2024-12-16 10:54:15.297164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.309 ms 00:22:15.324 [2024-12-16 10:54:15.297171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.324 [2024-12-16 10:54:15.297234] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.324 [2024-12-16 10:54:15.297244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:15.324 [2024-12-16 10:54:15.297250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:22:15.324 [2024-12-16 10:54:15.297257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.324 [2024-12-16 10:54:15.297330] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:15.324 [2024-12-16 10:54:15.297342] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:15.324 [2024-12-16 10:54:15.297348] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:15.324 [2024-12-16 10:54:15.297355] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:15.324 [2024-12-16 10:54:15.297361] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:15.324 [2024-12-16 10:54:15.297368] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:15.324 [2024-12-16 10:54:15.297373] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:22:15.324 [2024-12-16 10:54:15.297381] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:15.324 [2024-12-16 10:54:15.297388] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:22:15.324 [2024-12-16 10:54:15.297395] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:15.324 [2024-12-16 10:54:15.297401] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:15.324 [2024-12-16 10:54:15.297408] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:22:15.324 [2024-12-16 10:54:15.297414] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:15.324 [2024-12-16 10:54:15.297423] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:15.324 [2024-12-16 10:54:15.297429] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:22:15.324 [2024-12-16 10:54:15.297437] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:15.324 [2024-12-16 10:54:15.297443] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:15.324 [2024-12-16 10:54:15.297450] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:22:15.324 [2024-12-16 10:54:15.297456] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:15.324 [2024-12-16 10:54:15.297463] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:15.324 [2024-12-16 10:54:15.297469] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:22:15.324 [2024-12-16 10:54:15.297475] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:15.324 [2024-12-16 10:54:15.297481] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:15.324 [2024-12-16 10:54:15.297488] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:22:15.324 [2024-12-16 10:54:15.297494] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:15.324 [2024-12-16 10:54:15.297500] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:15.324 [2024-12-16 10:54:15.297507] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:22:15.324 [2024-12-16 10:54:15.297514] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:15.324 [2024-12-16 10:54:15.297520] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:15.324 [2024-12-16 10:54:15.297530] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:22:15.324 [2024-12-16 10:54:15.297536] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:15.324 [2024-12-16 10:54:15.297543] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:15.324 [2024-12-16 10:54:15.297549] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:22:15.324 [2024-12-16 10:54:15.297556] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:15.324 [2024-12-16 10:54:15.297562] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:15.324 [2024-12-16 10:54:15.297569] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:22:15.324 [2024-12-16 10:54:15.297574] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:15.324 [2024-12-16 10:54:15.297581] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:15.324 [2024-12-16 10:54:15.297587] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:22:15.324 [2024-12-16 10:54:15.297594] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:15.324 [2024-12-16 10:54:15.297601] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:15.324 [2024-12-16 10:54:15.297609] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:22:15.324 [2024-12-16 10:54:15.297614] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:15.324 [2024-12-16 10:54:15.297620] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:15.324 [2024-12-16 10:54:15.297632] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:15.324 [2024-12-16 10:54:15.297644] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:15.324 [2024-12-16 10:54:15.297650] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:15.324 [2024-12-16 10:54:15.297658] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:15.324 [2024-12-16 10:54:15.297665] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:15.324 [2024-12-16 10:54:15.297671] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:15.324 [2024-12-16 10:54:15.297677] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:15.324 [2024-12-16 10:54:15.297685] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:15.324 [2024-12-16 10:54:15.297692] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:15.324 [2024-12-16 10:54:15.297701] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:15.324 [2024-12-16 10:54:15.297709] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:15.324 [2024-12-16 10:54:15.297718] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:22:15.324 [2024-12-16 10:54:15.297725] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:22:15.324 [2024-12-16 10:54:15.297732] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:22:15.324 [2024-12-16 10:54:15.297738] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:22:15.324 [2024-12-16 10:54:15.297747] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:22:15.324 [2024-12-16 10:54:15.297753] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:22:15.324 [2024-12-16 10:54:15.297761] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:22:15.324 [2024-12-16 10:54:15.297767] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:22:15.324 [2024-12-16 10:54:15.297773] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:22:15.324 [2024-12-16 10:54:15.297778] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:22:15.325 [2024-12-16 10:54:15.297784] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:22:15.325 [2024-12-16 10:54:15.297789] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:22:15.325 [2024-12-16 10:54:15.297796] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:22:15.325 [2024-12-16 10:54:15.297801] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:22:15.325 [2024-12-16 10:54:15.297807] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:15.325 [2024-12-16 10:54:15.297815] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:15.325 [2024-12-16 10:54:15.297822] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:15.325 [2024-12-16 10:54:15.297828] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:15.325 [2024-12-16 10:54:15.297835] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:15.325 [2024-12-16 10:54:15.297840] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:15.325 [2024-12-16 10:54:15.297847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.325 [2024-12-16 10:54:15.297853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:15.325 [2024-12-16 10:54:15.297862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.565 ms 00:22:15.325 [2024-12-16 10:54:15.297867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.325 [2024-12-16 10:54:15.297897] mngt/ftl_mngt_misc.c: 
165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:22:15.325 [2024-12-16 10:54:15.297904] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:22:18.617 [2024-12-16 10:54:18.381110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.617 [2024-12-16 10:54:18.381164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:22:18.617 [2024-12-16 10:54:18.381180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3083.198 ms 00:22:18.617 [2024-12-16 10:54:18.381187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.617 [2024-12-16 10:54:18.388912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.617 [2024-12-16 10:54:18.388964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:18.617 [2024-12-16 10:54:18.388980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.653 ms 00:22:18.617 [2024-12-16 10:54:18.388987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.617 [2024-12-16 10:54:18.389055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.617 [2024-12-16 10:54:18.389062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:18.617 [2024-12-16 10:54:18.389072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:22:18.617 [2024-12-16 10:54:18.389078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.617 [2024-12-16 10:54:18.396016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.617 [2024-12-16 10:54:18.396045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:18.617 [2024-12-16 10:54:18.396054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.897 ms 00:22:18.617 [2024-12-16 10:54:18.396060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.617 [2024-12-16 10:54:18.396082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.617 [2024-12-16 10:54:18.396089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:18.617 [2024-12-16 10:54:18.396099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:22:18.617 [2024-12-16 10:54:18.396105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.617 [2024-12-16 10:54:18.396395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.617 [2024-12-16 10:54:18.396408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:18.617 [2024-12-16 10:54:18.396416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.261 ms 00:22:18.617 [2024-12-16 10:54:18.396422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.617 [2024-12-16 10:54:18.396510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.617 [2024-12-16 10:54:18.396521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:18.617 [2024-12-16 10:54:18.396529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:22:18.617 [2024-12-16 10:54:18.396538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.617 [2024-12-16 10:54:18.409681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.617 [2024-12-16 10:54:18.409729] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:18.617 [2024-12-16 10:54:18.409749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.121 ms 00:22:18.617 [2024-12-16 10:54:18.409760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.617 [2024-12-16 10:54:18.421007] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:22:18.617 [2024-12-16 10:54:18.423303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.617 [2024-12-16 10:54:18.423331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:18.617 [2024-12-16 10:54:18.423340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.429 ms 00:22:18.617 [2024-12-16 10:54:18.423348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.617 [2024-12-16 10:54:18.476207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.617 [2024-12-16 10:54:18.476367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:22:18.617 [2024-12-16 10:54:18.476382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 52.839 ms 00:22:18.617 [2024-12-16 10:54:18.476392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.617 [2024-12-16 10:54:18.476538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.617 [2024-12-16 10:54:18.476550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:18.617 [2024-12-16 10:54:18.476557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.111 ms 00:22:18.617 [2024-12-16 10:54:18.476565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.617 [2024-12-16 10:54:18.480143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.617 [2024-12-16 10:54:18.480174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:22:18.617 [2024-12-16 10:54:18.480185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.552 ms 00:22:18.617 [2024-12-16 10:54:18.480194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.617 [2024-12-16 10:54:18.483059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.617 [2024-12-16 10:54:18.483169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:22:18.617 [2024-12-16 10:54:18.483182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.834 ms 00:22:18.617 [2024-12-16 10:54:18.483189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.617 [2024-12-16 10:54:18.483438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.617 [2024-12-16 10:54:18.483448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:18.617 [2024-12-16 10:54:18.483455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.224 ms 00:22:18.617 [2024-12-16 10:54:18.483467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.617 [2024-12-16 10:54:18.512161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.617 [2024-12-16 10:54:18.512194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:22:18.617 [2024-12-16 10:54:18.512202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.679 ms 00:22:18.617 [2024-12-16 10:54:18.512213] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.617 [2024-12-16 10:54:18.516139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.617 [2024-12-16 10:54:18.516169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:22:18.617 [2024-12-16 10:54:18.516177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.886 ms 00:22:18.617 [2024-12-16 10:54:18.516186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.617 [2024-12-16 10:54:18.519667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.617 [2024-12-16 10:54:18.519697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:22:18.617 [2024-12-16 10:54:18.519704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.454 ms 00:22:18.617 [2024-12-16 10:54:18.519711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.617 [2024-12-16 10:54:18.523117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.617 [2024-12-16 10:54:18.523154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:18.617 [2024-12-16 10:54:18.523162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.379 ms 00:22:18.617 [2024-12-16 10:54:18.523172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.617 [2024-12-16 10:54:18.523203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.617 [2024-12-16 10:54:18.523212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:18.617 [2024-12-16 10:54:18.523218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:18.617 [2024-12-16 10:54:18.523226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.617 [2024-12-16 10:54:18.523275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.617 [2024-12-16 10:54:18.523283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:18.617 [2024-12-16 10:54:18.523289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:22:18.617 [2024-12-16 10:54:18.523295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.617 [2024-12-16 10:54:18.524001] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3234.468 ms, result 0 00:22:18.617 { 00:22:18.617 "name": "ftl0", 00:22:18.617 "uuid": "12c7a348-302f-4b33-b558-f5d26dfceb2c" 00:22:18.617 } 00:22:18.617 10:54:18 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:22:18.617 10:54:18 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:22:18.875 10:54:18 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:22:18.875 10:54:18 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:22:18.875 10:54:18 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:22:19.134 /dev/nbd0 00:22:19.134 10:54:18 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:22:19.134 10:54:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:22:19.134 10:54:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@869 -- # local i 00:22:19.134 10:54:18 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@871 -- # (( i = 1 )) 00:22:19.134 10:54:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:22:19.134 10:54:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:22:19.134 10:54:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@873 -- # break 00:22:19.134 10:54:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:22:19.134 10:54:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:22:19.134 10:54:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:22:19.134 1+0 records in 00:22:19.134 1+0 records out 00:22:19.134 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000354571 s, 11.6 MB/s 00:22:19.134 10:54:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:22:19.134 10:54:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@886 -- # size=4096 00:22:19.134 10:54:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:22:19.134 10:54:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:22:19.134 10:54:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@889 -- # return 0 00:22:19.134 10:54:18 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:22:19.134 [2024-12-16 10:54:19.042574] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:22:19.134 [2024-12-16 10:54:19.042794] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88978 ] 00:22:19.392 [2024-12-16 10:54:19.177966] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:19.392 [2024-12-16 10:54:19.220641] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:22:20.326  [2024-12-16T10:54:21.688Z] Copying: 194/1024 [MB] (194 MBps) [2024-12-16T10:54:22.621Z] Copying: 389/1024 [MB] (194 MBps) [2024-12-16T10:54:23.556Z] Copying: 603/1024 [MB] (214 MBps) [2024-12-16T10:54:24.123Z] Copying: 860/1024 [MB] (256 MBps) [2024-12-16T10:54:24.123Z] Copying: 1024/1024 [MB] (average 220 MBps) 00:22:24.134 00:22:24.392 10:54:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:22:26.290 10:54:26 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:22:26.290 [2024-12-16 10:54:26.229306] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
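The trace above creates the FTL bdev, saves the bdev subsystem config, exports ftl0 over NBD, probes the new block device with a single 4 KiB direct read, and then fills a 1 GiB testfile from /dev/urandom. A condensed sketch of that export-and-probe sequence, built from the same commands the trace executes (the retry pacing and the probe-file path are assumptions, not taken from this excerpt):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
modprobe nbd                                         # kernel NBD client provides /dev/nbd*
"$rpc" nbd_start_disk ftl0 /dev/nbd0                 # expose bdev ftl0 as /dev/nbd0
for i in $(seq 1 20); do                             # wait until the kernel lists the device
  grep -q -w nbd0 /proc/partitions && break
  sleep 0.1                                          # assumed pacing between retries
done
dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct   # one direct 4 KiB read
[ "$(stat -c %s /tmp/nbdtest)" -eq 4096 ]            # probe must return exactly one block

The read probe matters because /proc/partitions can list the device node slightly before the NBD connection is actually serving I/O; a successful direct read proves the full path through the NBD socket into the FTL bdev works.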
00:22:26.290 [2024-12-16 10:54:26.229401] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89054 ] 00:22:26.549 [2024-12-16 10:54:26.364532] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:26.549 [2024-12-16 10:54:26.406639] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:22:27.489  [2024-12-16T10:54:28.865Z] Copying: 14/1024 [MB] (14 MBps) [2024-12-16T10:54:29.803Z] Copying: 32/1024 [MB] (17 MBps) [2024-12-16T10:54:30.745Z] Copying: 52/1024 [MB] (20 MBps) [2024-12-16T10:54:31.687Z] Copying: 69/1024 [MB] (16 MBps) [2024-12-16T10:54:32.629Z] Copying: 85/1024 [MB] (16 MBps) [2024-12-16T10:54:33.579Z] Copying: 101/1024 [MB] (15 MBps) [2024-12-16T10:54:34.518Z] Copying: 119/1024 [MB] (18 MBps) [2024-12-16T10:54:35.906Z] Copying: 138/1024 [MB] (18 MBps) [2024-12-16T10:54:36.477Z] Copying: 152/1024 [MB] (14 MBps) [2024-12-16T10:54:37.863Z] Copying: 171/1024 [MB] (18 MBps) [2024-12-16T10:54:38.804Z] Copying: 185/1024 [MB] (14 MBps) [2024-12-16T10:54:39.744Z] Copying: 202/1024 [MB] (16 MBps) [2024-12-16T10:54:40.684Z] Copying: 220/1024 [MB] (18 MBps) [2024-12-16T10:54:41.626Z] Copying: 237/1024 [MB] (16 MBps) [2024-12-16T10:54:42.571Z] Copying: 251/1024 [MB] (14 MBps) [2024-12-16T10:54:43.515Z] Copying: 263/1024 [MB] (12 MBps) [2024-12-16T10:54:44.522Z] Copying: 278/1024 [MB] (14 MBps) [2024-12-16T10:54:45.908Z] Copying: 299/1024 [MB] (21 MBps) [2024-12-16T10:54:46.480Z] Copying: 313/1024 [MB] (14 MBps) [2024-12-16T10:54:47.864Z] Copying: 328/1024 [MB] (15 MBps) [2024-12-16T10:54:48.809Z] Copying: 345/1024 [MB] (16 MBps) [2024-12-16T10:54:49.754Z] Copying: 360/1024 [MB] (15 MBps) [2024-12-16T10:54:50.698Z] Copying: 375/1024 [MB] (14 MBps) [2024-12-16T10:54:51.659Z] Copying: 385/1024 [MB] (10 MBps) [2024-12-16T10:54:52.604Z] Copying: 395/1024 [MB] (10 MBps) [2024-12-16T10:54:53.548Z] Copying: 415440/1048576 [kB] (10012 kBps) [2024-12-16T10:54:54.491Z] Copying: 422/1024 [MB] (16 MBps) [2024-12-16T10:54:55.880Z] Copying: 435/1024 [MB] (12 MBps) [2024-12-16T10:54:56.823Z] Copying: 447/1024 [MB] (11 MBps) [2024-12-16T10:54:57.767Z] Copying: 457/1024 [MB] (10 MBps) [2024-12-16T10:54:58.712Z] Copying: 469/1024 [MB] (11 MBps) [2024-12-16T10:54:59.658Z] Copying: 490144/1048576 [kB] (9408 kBps) [2024-12-16T10:55:00.601Z] Copying: 499776/1048576 [kB] (9632 kBps) [2024-12-16T10:55:01.543Z] Copying: 498/1024 [MB] (10 MBps) [2024-12-16T10:55:02.484Z] Copying: 509/1024 [MB] (11 MBps) [2024-12-16T10:55:03.871Z] Copying: 520/1024 [MB] (10 MBps) [2024-12-16T10:55:04.814Z] Copying: 532/1024 [MB] (12 MBps) [2024-12-16T10:55:05.757Z] Copying: 543/1024 [MB] (11 MBps) [2024-12-16T10:55:06.702Z] Copying: 559/1024 [MB] (15 MBps) [2024-12-16T10:55:07.648Z] Copying: 570/1024 [MB] (11 MBps) [2024-12-16T10:55:08.592Z] Copying: 584/1024 [MB] (13 MBps) [2024-12-16T10:55:09.534Z] Copying: 594/1024 [MB] (10 MBps) [2024-12-16T10:55:10.477Z] Copying: 613/1024 [MB] (18 MBps) [2024-12-16T10:55:11.864Z] Copying: 630/1024 [MB] (17 MBps) [2024-12-16T10:55:12.865Z] Copying: 645/1024 [MB] (15 MBps) [2024-12-16T10:55:13.810Z] Copying: 658/1024 [MB] (13 MBps) [2024-12-16T10:55:14.754Z] Copying: 673/1024 [MB] (14 MBps) [2024-12-16T10:55:15.699Z] Copying: 686/1024 [MB] (12 MBps) [2024-12-16T10:55:16.641Z] Copying: 703/1024 [MB] (16 MBps) [2024-12-16T10:55:17.583Z] Copying: 718/1024 [MB] (15 MBps) 
[2024-12-16T10:55:18.525Z] Copying: 730/1024 [MB] (11 MBps) [2024-12-16T10:55:19.912Z] Copying: 746/1024 [MB] (15 MBps) [2024-12-16T10:55:20.486Z] Copying: 760/1024 [MB] (13 MBps) [2024-12-16T10:55:21.873Z] Copying: 776/1024 [MB] (15 MBps) [2024-12-16T10:55:22.813Z] Copying: 791/1024 [MB] (15 MBps) [2024-12-16T10:55:23.756Z] Copying: 806/1024 [MB] (14 MBps) [2024-12-16T10:55:24.699Z] Copying: 823/1024 [MB] (17 MBps) [2024-12-16T10:55:25.647Z] Copying: 839/1024 [MB] (16 MBps) [2024-12-16T10:55:26.588Z] Copying: 853/1024 [MB] (13 MBps) [2024-12-16T10:55:27.521Z] Copying: 869/1024 [MB] (15 MBps) [2024-12-16T10:55:28.904Z] Copying: 898/1024 [MB] (29 MBps) [2024-12-16T10:55:29.478Z] Copying: 924/1024 [MB] (25 MBps) [2024-12-16T10:55:30.864Z] Copying: 939/1024 [MB] (15 MBps) [2024-12-16T10:55:31.808Z] Copying: 956/1024 [MB] (16 MBps) [2024-12-16T10:55:32.749Z] Copying: 972/1024 [MB] (16 MBps) [2024-12-16T10:55:33.692Z] Copying: 991/1024 [MB] (19 MBps) [2024-12-16T10:55:34.264Z] Copying: 1010/1024 [MB] (19 MBps) [2024-12-16T10:55:34.525Z] Copying: 1024/1024 [MB] (average 15 MBps) 00:23:34.536 00:23:34.536 10:55:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:23:34.536 10:55:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:23:34.796 10:55:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:23:35.058 [2024-12-16 10:55:34.831880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.058 [2024-12-16 10:55:34.831965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:35.058 [2024-12-16 10:55:34.831984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:35.058 [2024-12-16 10:55:34.831993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.058 [2024-12-16 10:55:34.832024] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:35.058 [2024-12-16 10:55:34.832796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.058 [2024-12-16 10:55:34.832841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:35.058 [2024-12-16 10:55:34.832852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.755 ms 00:23:35.058 [2024-12-16 10:55:34.832871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.058 [2024-12-16 10:55:34.835761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.058 [2024-12-16 10:55:34.835819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:35.058 [2024-12-16 10:55:34.835831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.860 ms 00:23:35.058 [2024-12-16 10:55:34.835842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.058 [2024-12-16 10:55:34.855406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.058 [2024-12-16 10:55:34.855440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:35.058 [2024-12-16 10:55:34.855450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.547 ms 00:23:35.058 [2024-12-16 10:55:34.855459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.058 [2024-12-16 10:55:34.861643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.058 
[2024-12-16 10:55:34.861672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:35.058 [2024-12-16 10:55:34.861682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.151 ms 00:23:35.058 [2024-12-16 10:55:34.861692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.058 [2024-12-16 10:55:34.863602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.058 [2024-12-16 10:55:34.863641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:35.058 [2024-12-16 10:55:34.863650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.845 ms 00:23:35.059 [2024-12-16 10:55:34.863659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.059 [2024-12-16 10:55:34.870138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.059 [2024-12-16 10:55:34.870258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:35.059 [2024-12-16 10:55:34.870294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.439 ms 00:23:35.059 [2024-12-16 10:55:34.870329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.059 [2024-12-16 10:55:34.870721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.059 [2024-12-16 10:55:34.870775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:35.059 [2024-12-16 10:55:34.870803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.299 ms 00:23:35.059 [2024-12-16 10:55:34.870829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.059 [2024-12-16 10:55:34.874399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.059 [2024-12-16 10:55:34.874485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:23:35.059 [2024-12-16 10:55:34.874511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.521 ms 00:23:35.059 [2024-12-16 10:55:34.874536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.059 [2024-12-16 10:55:34.877240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.059 [2024-12-16 10:55:34.877285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:23:35.059 [2024-12-16 10:55:34.877293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.625 ms 00:23:35.059 [2024-12-16 10:55:34.877302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.059 [2024-12-16 10:55:34.879472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.059 [2024-12-16 10:55:34.879608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:35.059 [2024-12-16 10:55:34.879623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.137 ms 00:23:35.059 [2024-12-16 10:55:34.879632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.059 [2024-12-16 10:55:34.881402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.059 [2024-12-16 10:55:34.881440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:35.059 [2024-12-16 10:55:34.881449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.714 ms 00:23:35.059 [2024-12-16 10:55:34.881459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.059 [2024-12-16 10:55:34.881490] 
ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:35.059 [2024-12-16 10:55:34.881506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:23:35.059 [2024-12-16 10:55:34.881516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:35.059 [2024-12-16 10:55:34.881526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:35.059 [2024-12-16 10:55:34.881534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:35.059 [2024-12-16 10:55:34.881546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:35.059 [2024-12-16 10:55:34.881554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:35.059 [2024-12-16 10:55:34.881564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:35.059 [2024-12-16 10:55:34.881572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:35.059 [2024-12-16 10:55:34.881581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:35.059 [2024-12-16 10:55:34.881589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:35.059 [2024-12-16 10:55:34.881599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:35.059 [2024-12-16 10:55:34.881607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:35.059 [2024-12-16 10:55:34.881616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:35.059 [2024-12-16 10:55:34.881624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:35.059 [2024-12-16 10:55:34.881636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:35.059 [2024-12-16 10:55:34.881644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:35.059 [2024-12-16 10:55:34.881653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:35.059 [2024-12-16 10:55:34.881661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:35.059 [2024-12-16 10:55:34.881670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:35.059 [2024-12-16 10:55:34.881678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:35.059 [2024-12-16 10:55:34.881689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:35.059 [2024-12-16 10:55:34.881696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:35.059 [2024-12-16 10:55:34.881705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:35.059 [2024-12-16 10:55:34.881713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:35.059 [2024-12-16 10:55:34.881722] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:35.059 [2024-12-16 10:55:34.881731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:35.059 [2024-12-16 10:55:34.881741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:35.059 [2024-12-16 10:55:34.881748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:35.059 [2024-12-16 10:55:34.881758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:35.059 [2024-12-16 10:55:34.881766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:35.059 [2024-12-16 10:55:34.881775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:35.059 [2024-12-16 10:55:34.881782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:35.059 [2024-12-16 10:55:34.881791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:35.059 [2024-12-16 10:55:34.881799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:35.059 [2024-12-16 10:55:34.881808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:35.059 [2024-12-16 10:55:34.881815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:35.059 [2024-12-16 10:55:34.881826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:35.059 [2024-12-16 10:55:34.881834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:35.059 [2024-12-16 10:55:34.881843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:35.059 [2024-12-16 10:55:34.881850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:35.059 [2024-12-16 10:55:34.881860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:35.059 [2024-12-16 10:55:34.881871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:35.059 [2024-12-16 10:55:34.881880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:35.059 [2024-12-16 10:55:34.881887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:35.059 [2024-12-16 10:55:34.881896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:35.059 [2024-12-16 10:55:34.881904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:35.059 [2024-12-16 10:55:34.881922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:35.059 [2024-12-16 10:55:34.881946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:35.059 [2024-12-16 10:55:34.881955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:35.059 [2024-12-16 
10:55:34.881963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:35.059 [2024-12-16 10:55:34.881972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:35.059 [2024-12-16 10:55:34.881979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:35.059 [2024-12-16 10:55:34.881991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:35.059 [2024-12-16 10:55:34.881998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:35.059 [2024-12-16 10:55:34.882008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:35.059 [2024-12-16 10:55:34.882016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:35.059 [2024-12-16 10:55:34.882025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:35.059 [2024-12-16 10:55:34.882033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:35.059 [2024-12-16 10:55:34.882042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:35.059 [2024-12-16 10:55:34.882049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:35.059 [2024-12-16 10:55:34.882058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:35.059 [2024-12-16 10:55:34.882069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:35.059 [2024-12-16 10:55:34.882078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:35.059 [2024-12-16 10:55:34.882085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:35.059 [2024-12-16 10:55:34.882094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:35.059 [2024-12-16 10:55:34.882102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:35.059 [2024-12-16 10:55:34.882113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:35.059 [2024-12-16 10:55:34.882120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:35.059 [2024-12-16 10:55:34.882131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:35.059 [2024-12-16 10:55:34.882138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:35.059 [2024-12-16 10:55:34.882147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:35.059 [2024-12-16 10:55:34.882155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:35.059 [2024-12-16 10:55:34.882170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:35.059 [2024-12-16 10:55:34.882178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 
00:23:35.059 [2024-12-16 10:55:34.882186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:35.059 [2024-12-16 10:55:34.882193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:35.059 [2024-12-16 10:55:34.882202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:35.059 [2024-12-16 10:55:34.882209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:35.059 [2024-12-16 10:55:34.882218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:35.059 [2024-12-16 10:55:34.882225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:35.059 [2024-12-16 10:55:34.882235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:35.059 [2024-12-16 10:55:34.882242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:35.059 [2024-12-16 10:55:34.882251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:35.059 [2024-12-16 10:55:34.882258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:35.059 [2024-12-16 10:55:34.882269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:35.059 [2024-12-16 10:55:34.882276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:35.059 [2024-12-16 10:55:34.882285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:35.059 [2024-12-16 10:55:34.882292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:35.059 [2024-12-16 10:55:34.882301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:35.059 [2024-12-16 10:55:34.882309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:35.059 [2024-12-16 10:55:34.882318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:35.059 [2024-12-16 10:55:34.882325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:35.059 [2024-12-16 10:55:34.882335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:35.059 [2024-12-16 10:55:34.882343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:35.059 [2024-12-16 10:55:34.882352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:35.059 [2024-12-16 10:55:34.882359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:35.060 [2024-12-16 10:55:34.882368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:35.060 [2024-12-16 10:55:34.882376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:35.060 [2024-12-16 10:55:34.882385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 
wr_cnt: 0 state: free 00:23:35.060 [2024-12-16 10:55:34.882393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:35.060 [2024-12-16 10:55:34.882411] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:35.060 [2024-12-16 10:55:34.882418] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 12c7a348-302f-4b33-b558-f5d26dfceb2c 00:23:35.060 [2024-12-16 10:55:34.882428] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:23:35.060 [2024-12-16 10:55:34.882437] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:23:35.060 [2024-12-16 10:55:34.882445] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:23:35.060 [2024-12-16 10:55:34.882452] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:23:35.060 [2024-12-16 10:55:34.882461] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:35.060 [2024-12-16 10:55:34.882468] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:35.060 [2024-12-16 10:55:34.882477] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:35.060 [2024-12-16 10:55:34.882483] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:35.060 [2024-12-16 10:55:34.882491] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:35.060 [2024-12-16 10:55:34.882498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.060 [2024-12-16 10:55:34.882507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:35.060 [2024-12-16 10:55:34.882520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.009 ms 00:23:35.060 [2024-12-16 10:55:34.882529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.060 [2024-12-16 10:55:34.884030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.060 [2024-12-16 10:55:34.884061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:35.060 [2024-12-16 10:55:34.884070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.484 ms 00:23:35.060 [2024-12-16 10:55:34.884079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.060 [2024-12-16 10:55:34.884157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.060 [2024-12-16 10:55:34.884167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:35.060 [2024-12-16 10:55:34.884175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:23:35.060 [2024-12-16 10:55:34.884183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.060 [2024-12-16 10:55:34.889591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:35.060 [2024-12-16 10:55:34.889629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:35.060 [2024-12-16 10:55:34.889643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:35.060 [2024-12-16 10:55:34.889651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.060 [2024-12-16 10:55:34.889706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:35.060 [2024-12-16 10:55:34.889716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:35.060 [2024-12-16 10:55:34.889723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 00:23:35.060 [2024-12-16 10:55:34.889732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.060 [2024-12-16 10:55:34.889799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:35.060 [2024-12-16 10:55:34.889812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:35.060 [2024-12-16 10:55:34.889820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:35.060 [2024-12-16 10:55:34.889829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.060 [2024-12-16 10:55:34.889846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:35.060 [2024-12-16 10:55:34.889854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:35.060 [2024-12-16 10:55:34.889861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:35.060 [2024-12-16 10:55:34.889870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.060 [2024-12-16 10:55:34.898985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:35.060 [2024-12-16 10:55:34.899024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:35.060 [2024-12-16 10:55:34.899033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:35.060 [2024-12-16 10:55:34.899042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.060 [2024-12-16 10:55:34.906837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:35.060 [2024-12-16 10:55:34.906879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:35.060 [2024-12-16 10:55:34.906894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:35.060 [2024-12-16 10:55:34.906903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.060 [2024-12-16 10:55:34.907149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:35.060 [2024-12-16 10:55:34.907195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:35.060 [2024-12-16 10:55:34.907210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:35.060 [2024-12-16 10:55:34.907219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.060 [2024-12-16 10:55:34.907277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:35.060 [2024-12-16 10:55:34.907289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:35.060 [2024-12-16 10:55:34.907296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:35.060 [2024-12-16 10:55:34.907306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.060 [2024-12-16 10:55:34.907373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:35.060 [2024-12-16 10:55:34.907391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:35.060 [2024-12-16 10:55:34.907400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:35.060 [2024-12-16 10:55:34.907408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.060 [2024-12-16 10:55:34.907442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:35.060 [2024-12-16 10:55:34.907453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:35.060 
[2024-12-16 10:55:34.907460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:35.060 [2024-12-16 10:55:34.907469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.060 [2024-12-16 10:55:34.907506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:35.060 [2024-12-16 10:55:34.907518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:35.060 [2024-12-16 10:55:34.907527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:35.060 [2024-12-16 10:55:34.907536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.060 [2024-12-16 10:55:34.907580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:35.060 [2024-12-16 10:55:34.907596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:35.060 [2024-12-16 10:55:34.907603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:35.060 [2024-12-16 10:55:34.907611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.060 [2024-12-16 10:55:34.907739] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 75.829 ms, result 0 00:23:35.060 true 00:23:35.060 10:55:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 88834 00:23:35.060 10:55:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid88834 00:23:35.060 10:55:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:23:35.060 [2024-12-16 10:55:34.985659] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:23:35.060 [2024-12-16 10:55:34.985782] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89771 ] 00:23:35.321 [2024-12-16 10:55:35.121582] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:35.321 [2024-12-16 10:55:35.168049] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:23:36.264  [2024-12-16T10:55:37.634Z] Copying: 193/1024 [MB] (193 MBps) [2024-12-16T10:55:38.568Z] Copying: 403/1024 [MB] (209 MBps) [2024-12-16T10:55:39.502Z] Copying: 664/1024 [MB] (261 MBps) [2024-12-16T10:55:39.761Z] Copying: 920/1024 [MB] (256 MBps) [2024-12-16T10:55:40.019Z] Copying: 1024/1024 [MB] (average 232 MBps) 00:23:40.030 00:23:40.030 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 88834 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:23:40.030 10:55:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:23:40.030 [2024-12-16 10:55:39.844264] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
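By this point ftl0 has been written through /dev/nbd0, synced, and the NBD disk stopped, and bdev_ftl_unload has run a clean 'FTL shutdown' (result 0). The target process 88834 is then killed with SIGKILL, which is what the bash notice "line 87: 88834 Killed" reports, and a second 1 GiB testfile2 is generated. The spdk_dd invocation that follows writes testfile2 into ftl0 at an offset, standing up the bdev stack itself from the saved JSON config since no target process is left running. A minimal sketch of that step, with $tgt_pid as a hypothetical variable and paths shortened from the trace:

dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
kill -9 "$tgt_pid"                                   # SIGKILL: no FTL shutdown path runs this time
rm -f "/dev/shm/spdk_tgt_trace.pid${tgt_pid}"        # drop the stale trace file
# spdk_dd recreates the bdev stack from the saved config, so the dead target is
# not needed; --seek=262144 places this 1 GiB after the first one, in the 4 KiB
# block units used throughout the test (262144 x 4096 B = 1 GiB).
"$dd_bin" --if=test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 \
          --json=test/ftl/config/ftl.json

Because the device was last written without a shutdown, the FTL superblock stays marked dirty, which forces the recovery path seen in the startup that follows.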
00:23:40.030 [2024-12-16 10:55:39.844390] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89825 ] 00:23:40.030 [2024-12-16 10:55:39.979532] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:40.288 [2024-12-16 10:55:40.030340] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:23:40.288 [2024-12-16 10:55:40.116721] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:40.289 [2024-12-16 10:55:40.116772] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:40.289 [2024-12-16 10:55:40.178332] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:23:40.289 [2024-12-16 10:55:40.178625] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:23:40.289 [2024-12-16 10:55:40.178833] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:23:40.548 [2024-12-16 10:55:40.378943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.548 [2024-12-16 10:55:40.379067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:40.548 [2024-12-16 10:55:40.379083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:23:40.548 [2024-12-16 10:55:40.379098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.548 [2024-12-16 10:55:40.379138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.548 [2024-12-16 10:55:40.379146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:40.548 [2024-12-16 10:55:40.379154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:23:40.549 [2024-12-16 10:55:40.379160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.549 [2024-12-16 10:55:40.379175] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:40.549 [2024-12-16 10:55:40.379353] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:40.549 [2024-12-16 10:55:40.379367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.549 [2024-12-16 10:55:40.379376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:40.549 [2024-12-16 10:55:40.379383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.196 ms 00:23:40.549 [2024-12-16 10:55:40.379390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.549 [2024-12-16 10:55:40.380312] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:23:40.549 [2024-12-16 10:55:40.382606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.549 [2024-12-16 10:55:40.382726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:40.549 [2024-12-16 10:55:40.382739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.295 ms 00:23:40.549 [2024-12-16 10:55:40.382745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.549 [2024-12-16 10:55:40.382784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.549 [2024-12-16 10:55:40.382791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super 
block 00:23:40.549 [2024-12-16 10:55:40.382798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:23:40.549 [2024-12-16 10:55:40.382805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.549 [2024-12-16 10:55:40.387170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.549 [2024-12-16 10:55:40.387195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:40.549 [2024-12-16 10:55:40.387203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.322 ms 00:23:40.549 [2024-12-16 10:55:40.387209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.549 [2024-12-16 10:55:40.387274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.549 [2024-12-16 10:55:40.387282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:40.549 [2024-12-16 10:55:40.387288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:23:40.549 [2024-12-16 10:55:40.387293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.549 [2024-12-16 10:55:40.387328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.549 [2024-12-16 10:55:40.387344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:40.549 [2024-12-16 10:55:40.387350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:40.549 [2024-12-16 10:55:40.387359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.549 [2024-12-16 10:55:40.387376] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:40.549 [2024-12-16 10:55:40.388515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.549 [2024-12-16 10:55:40.388545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:40.549 [2024-12-16 10:55:40.388552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.143 ms 00:23:40.549 [2024-12-16 10:55:40.388560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.549 [2024-12-16 10:55:40.388582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.549 [2024-12-16 10:55:40.388591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:40.549 [2024-12-16 10:55:40.388597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:23:40.549 [2024-12-16 10:55:40.388603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.549 [2024-12-16 10:55:40.388627] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:40.549 [2024-12-16 10:55:40.388641] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:23:40.549 [2024-12-16 10:55:40.388674] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:40.549 [2024-12-16 10:55:40.388686] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:23:40.549 [2024-12-16 10:55:40.388775] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:40.549 [2024-12-16 10:55:40.388782] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:40.549 
[2024-12-16 10:55:40.388790] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:40.549 [2024-12-16 10:55:40.388798] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:40.549 [2024-12-16 10:55:40.388805] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:40.549 [2024-12-16 10:55:40.388810] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:40.549 [2024-12-16 10:55:40.388816] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:40.549 [2024-12-16 10:55:40.388821] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:40.549 [2024-12-16 10:55:40.388826] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:40.549 [2024-12-16 10:55:40.388831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.549 [2024-12-16 10:55:40.388839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:40.549 [2024-12-16 10:55:40.388845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.206 ms 00:23:40.549 [2024-12-16 10:55:40.388850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.549 [2024-12-16 10:55:40.388913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.549 [2024-12-16 10:55:40.388921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:40.549 [2024-12-16 10:55:40.388940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:23:40.549 [2024-12-16 10:55:40.388946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.549 [2024-12-16 10:55:40.389017] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:40.549 [2024-12-16 10:55:40.389024] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:40.549 [2024-12-16 10:55:40.389033] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:40.549 [2024-12-16 10:55:40.389039] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:40.549 [2024-12-16 10:55:40.389047] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:40.549 [2024-12-16 10:55:40.389053] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:40.549 [2024-12-16 10:55:40.389058] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:40.549 [2024-12-16 10:55:40.389064] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:40.549 [2024-12-16 10:55:40.389069] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:40.549 [2024-12-16 10:55:40.389074] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:40.549 [2024-12-16 10:55:40.389080] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:40.549 [2024-12-16 10:55:40.389085] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:40.549 [2024-12-16 10:55:40.389090] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:40.549 [2024-12-16 10:55:40.389100] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:40.549 [2024-12-16 10:55:40.389105] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:23:40.549 [2024-12-16 10:55:40.389110] ftl_layout.c: 133:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:40.549 [2024-12-16 10:55:40.389115] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:40.549 [2024-12-16 10:55:40.389120] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:23:40.549 [2024-12-16 10:55:40.389125] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:40.549 [2024-12-16 10:55:40.389130] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:40.549 [2024-12-16 10:55:40.389135] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:40.549 [2024-12-16 10:55:40.389140] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:40.549 [2024-12-16 10:55:40.389145] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:40.549 [2024-12-16 10:55:40.389150] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:40.549 [2024-12-16 10:55:40.389155] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:40.549 [2024-12-16 10:55:40.389160] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:40.549 [2024-12-16 10:55:40.389165] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:40.549 [2024-12-16 10:55:40.389170] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:40.549 [2024-12-16 10:55:40.389176] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:40.549 [2024-12-16 10:55:40.389186] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:23:40.549 [2024-12-16 10:55:40.389191] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:40.549 [2024-12-16 10:55:40.389197] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:40.549 [2024-12-16 10:55:40.389203] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:23:40.549 [2024-12-16 10:55:40.389209] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:40.549 [2024-12-16 10:55:40.389214] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:40.549 [2024-12-16 10:55:40.389220] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:23:40.549 [2024-12-16 10:55:40.389225] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:40.549 [2024-12-16 10:55:40.389231] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:40.549 [2024-12-16 10:55:40.389236] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:23:40.549 [2024-12-16 10:55:40.389242] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:40.549 [2024-12-16 10:55:40.389247] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:40.549 [2024-12-16 10:55:40.389253] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:23:40.549 [2024-12-16 10:55:40.389260] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:40.549 [2024-12-16 10:55:40.389265] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:40.549 [2024-12-16 10:55:40.389272] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:40.549 [2024-12-16 10:55:40.389279] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:40.550 [2024-12-16 10:55:40.389285] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:40.550 [2024-12-16 
10:55:40.389292] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:40.550 [2024-12-16 10:55:40.389297] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:40.550 [2024-12-16 10:55:40.389303] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:40.550 [2024-12-16 10:55:40.389309] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:40.550 [2024-12-16 10:55:40.389315] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:40.550 [2024-12-16 10:55:40.389321] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:40.550 [2024-12-16 10:55:40.389327] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:40.550 [2024-12-16 10:55:40.389334] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:40.550 [2024-12-16 10:55:40.389341] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:40.550 [2024-12-16 10:55:40.389347] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:23:40.550 [2024-12-16 10:55:40.389353] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:23:40.550 [2024-12-16 10:55:40.389359] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:23:40.550 [2024-12-16 10:55:40.389364] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:23:40.550 [2024-12-16 10:55:40.389374] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:23:40.550 [2024-12-16 10:55:40.389380] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:23:40.550 [2024-12-16 10:55:40.389386] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:23:40.550 [2024-12-16 10:55:40.389391] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:23:40.550 [2024-12-16 10:55:40.389397] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:23:40.550 [2024-12-16 10:55:40.389402] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:23:40.550 [2024-12-16 10:55:40.389407] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:23:40.550 [2024-12-16 10:55:40.389412] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:23:40.550 [2024-12-16 10:55:40.389418] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:23:40.550 [2024-12-16 10:55:40.389423] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - 
base dev: 00:23:40.550 [2024-12-16 10:55:40.389429] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:40.550 [2024-12-16 10:55:40.389437] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:40.550 [2024-12-16 10:55:40.389442] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:40.550 [2024-12-16 10:55:40.389447] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:40.550 [2024-12-16 10:55:40.389453] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:40.550 [2024-12-16 10:55:40.389459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.550 [2024-12-16 10:55:40.389466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:40.550 [2024-12-16 10:55:40.389473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.494 ms 00:23:40.550 [2024-12-16 10:55:40.389478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.550 [2024-12-16 10:55:40.406241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.550 [2024-12-16 10:55:40.406419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:40.550 [2024-12-16 10:55:40.406622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.727 ms 00:23:40.550 [2024-12-16 10:55:40.406667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.550 [2024-12-16 10:55:40.406815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.550 [2024-12-16 10:55:40.407048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:40.550 [2024-12-16 10:55:40.407268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:23:40.550 [2024-12-16 10:55:40.407373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.550 [2024-12-16 10:55:40.416974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.550 [2024-12-16 10:55:40.417116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:40.550 [2024-12-16 10:55:40.417189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.494 ms 00:23:40.550 [2024-12-16 10:55:40.417223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.550 [2024-12-16 10:55:40.417323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.550 [2024-12-16 10:55:40.417346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:40.550 [2024-12-16 10:55:40.417362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:23:40.550 [2024-12-16 10:55:40.417651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.550 [2024-12-16 10:55:40.418028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.550 [2024-12-16 10:55:40.418119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:40.550 [2024-12-16 10:55:40.418171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.290 ms 00:23:40.550 [2024-12-16 10:55:40.418193] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.550 [2024-12-16 10:55:40.418308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.550 [2024-12-16 10:55:40.418326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:40.550 [2024-12-16 10:55:40.418368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:23:40.550 [2024-12-16 10:55:40.418384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.550 [2024-12-16 10:55:40.422391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.550 [2024-12-16 10:55:40.422475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:40.550 [2024-12-16 10:55:40.422523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.982 ms 00:23:40.550 [2024-12-16 10:55:40.422540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.550 [2024-12-16 10:55:40.424664] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:23:40.550 [2024-12-16 10:55:40.424700] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:40.550 [2024-12-16 10:55:40.424708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.550 [2024-12-16 10:55:40.424717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:40.550 [2024-12-16 10:55:40.424723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.105 ms 00:23:40.550 [2024-12-16 10:55:40.424729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.550 [2024-12-16 10:55:40.436212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.550 [2024-12-16 10:55:40.436251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:40.550 [2024-12-16 10:55:40.436259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.454 ms 00:23:40.550 [2024-12-16 10:55:40.436270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.550 [2024-12-16 10:55:40.438048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.550 [2024-12-16 10:55:40.438075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:40.550 [2024-12-16 10:55:40.438082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.747 ms 00:23:40.550 [2024-12-16 10:55:40.438087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.550 [2024-12-16 10:55:40.439442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.550 [2024-12-16 10:55:40.439467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:40.550 [2024-12-16 10:55:40.439473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.327 ms 00:23:40.550 [2024-12-16 10:55:40.439479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.550 [2024-12-16 10:55:40.439727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.550 [2024-12-16 10:55:40.439741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:40.550 [2024-12-16 10:55:40.439752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.203 ms 00:23:40.550 [2024-12-16 10:55:40.439758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.550 
[2024-12-16 10:55:40.454088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.550 [2024-12-16 10:55:40.454124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:40.550 [2024-12-16 10:55:40.454133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.317 ms 00:23:40.550 [2024-12-16 10:55:40.454140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.550 [2024-12-16 10:55:40.459945] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:23:40.550 [2024-12-16 10:55:40.462063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.550 [2024-12-16 10:55:40.462089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:40.550 [2024-12-16 10:55:40.462096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.891 ms 00:23:40.550 [2024-12-16 10:55:40.462103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.550 [2024-12-16 10:55:40.462156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.550 [2024-12-16 10:55:40.462166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:40.550 [2024-12-16 10:55:40.462174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:23:40.550 [2024-12-16 10:55:40.462181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.550 [2024-12-16 10:55:40.462239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.550 [2024-12-16 10:55:40.462247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:40.550 [2024-12-16 10:55:40.462256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:23:40.550 [2024-12-16 10:55:40.462262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.550 [2024-12-16 10:55:40.462280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.550 [2024-12-16 10:55:40.462287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:40.550 [2024-12-16 10:55:40.462293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:23:40.550 [2024-12-16 10:55:40.462298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.550 [2024-12-16 10:55:40.462323] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:40.551 [2024-12-16 10:55:40.462332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.551 [2024-12-16 10:55:40.462338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:40.551 [2024-12-16 10:55:40.462344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:23:40.551 [2024-12-16 10:55:40.462354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.551 [2024-12-16 10:55:40.465923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.551 [2024-12-16 10:55:40.465960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:40.551 [2024-12-16 10:55:40.465968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.556 ms 00:23:40.551 [2024-12-16 10:55:40.465974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.551 [2024-12-16 10:55:40.466029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.551 [2024-12-16 10:55:40.466038] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:40.551 [2024-12-16 10:55:40.466048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:23:40.551 [2024-12-16 10:55:40.466055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.551 [2024-12-16 10:55:40.467103] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 87.828 ms, result 0 00:23:41.550  [2024-12-16T10:55:42.916Z] Copying: 19/1024 [MB] (19 MBps) [2024-12-16T10:55:43.481Z] Copying: 37/1024 [MB] (18 MBps) [2024-12-16T10:55:44.853Z] Copying: 61/1024 [MB] (24 MBps) [2024-12-16T10:55:45.791Z] Copying: 86/1024 [MB] (24 MBps) [2024-12-16T10:55:46.735Z] Copying: 109/1024 [MB] (23 MBps) [2024-12-16T10:55:47.669Z] Copying: 121/1024 [MB] (12 MBps) [2024-12-16T10:55:48.602Z] Copying: 154/1024 [MB] (32 MBps) [2024-12-16T10:55:49.544Z] Copying: 188/1024 [MB] (33 MBps) [2024-12-16T10:55:50.483Z] Copying: 215/1024 [MB] (27 MBps) [2024-12-16T10:55:51.870Z] Copying: 234/1024 [MB] (18 MBps) [2024-12-16T10:55:52.812Z] Copying: 254/1024 [MB] (19 MBps) [2024-12-16T10:55:53.755Z] Copying: 266/1024 [MB] (12 MBps) [2024-12-16T10:55:54.700Z] Copying: 279/1024 [MB] (13 MBps) [2024-12-16T10:55:55.644Z] Copying: 294/1024 [MB] (14 MBps) [2024-12-16T10:55:56.589Z] Copying: 307/1024 [MB] (13 MBps) [2024-12-16T10:55:57.532Z] Copying: 323/1024 [MB] (16 MBps) [2024-12-16T10:55:58.922Z] Copying: 343/1024 [MB] (19 MBps) [2024-12-16T10:55:59.494Z] Copying: 359/1024 [MB] (15 MBps) [2024-12-16T10:56:00.883Z] Copying: 376/1024 [MB] (16 MBps) [2024-12-16T10:56:01.828Z] Copying: 392/1024 [MB] (16 MBps) [2024-12-16T10:56:02.773Z] Copying: 411/1024 [MB] (18 MBps) [2024-12-16T10:56:03.719Z] Copying: 421/1024 [MB] (10 MBps) [2024-12-16T10:56:04.665Z] Copying: 432/1024 [MB] (10 MBps) [2024-12-16T10:56:05.610Z] Copying: 442/1024 [MB] (10 MBps) [2024-12-16T10:56:06.551Z] Copying: 453/1024 [MB] (10 MBps) [2024-12-16T10:56:07.496Z] Copying: 474/1024 [MB] (20 MBps) [2024-12-16T10:56:08.887Z] Copying: 493/1024 [MB] (19 MBps) [2024-12-16T10:56:09.836Z] Copying: 510/1024 [MB] (16 MBps) [2024-12-16T10:56:10.808Z] Copying: 527/1024 [MB] (17 MBps) [2024-12-16T10:56:11.754Z] Copying: 543/1024 [MB] (15 MBps) [2024-12-16T10:56:12.700Z] Copying: 559/1024 [MB] (16 MBps) [2024-12-16T10:56:13.646Z] Copying: 576/1024 [MB] (17 MBps) [2024-12-16T10:56:14.593Z] Copying: 596/1024 [MB] (19 MBps) [2024-12-16T10:56:15.540Z] Copying: 609/1024 [MB] (12 MBps) [2024-12-16T10:56:16.483Z] Copying: 623/1024 [MB] (14 MBps) [2024-12-16T10:56:17.868Z] Copying: 644/1024 [MB] (21 MBps) [2024-12-16T10:56:18.812Z] Copying: 663/1024 [MB] (19 MBps) [2024-12-16T10:56:19.754Z] Copying: 682/1024 [MB] (18 MBps) [2024-12-16T10:56:20.697Z] Copying: 695/1024 [MB] (13 MBps) [2024-12-16T10:56:21.643Z] Copying: 708/1024 [MB] (13 MBps) [2024-12-16T10:56:22.589Z] Copying: 724/1024 [MB] (15 MBps) [2024-12-16T10:56:23.534Z] Copying: 740/1024 [MB] (16 MBps) [2024-12-16T10:56:24.922Z] Copying: 758/1024 [MB] (18 MBps) [2024-12-16T10:56:25.494Z] Copying: 787032/1048576 [kB] (10084 kBps) [2024-12-16T10:56:26.880Z] Copying: 785/1024 [MB] (16 MBps) [2024-12-16T10:56:27.827Z] Copying: 798/1024 [MB] (13 MBps) [2024-12-16T10:56:28.773Z] Copying: 817/1024 [MB] (19 MBps) [2024-12-16T10:56:29.719Z] Copying: 832/1024 [MB] (15 MBps) [2024-12-16T10:56:30.661Z] Copying: 842/1024 [MB] (10 MBps) [2024-12-16T10:56:31.605Z] Copying: 873168/1048576 [kB] (10112 kBps) [2024-12-16T10:56:32.546Z] Copying: 
883256/1048576 [kB] (10088 kBps) [2024-12-16T10:56:33.484Z] Copying: 874/1024 [MB] (11 MBps) [2024-12-16T10:56:34.871Z] Copying: 896/1024 [MB] (22 MBps) [2024-12-16T10:56:35.813Z] Copying: 907/1024 [MB] (10 MBps) [2024-12-16T10:56:36.759Z] Copying: 936/1024 [MB] (29 MBps) [2024-12-16T10:56:37.697Z] Copying: 951/1024 [MB] (15 MBps) [2024-12-16T10:56:38.636Z] Copying: 982/1024 [MB] (30 MBps) [2024-12-16T10:56:39.649Z] Copying: 1009/1024 [MB] (26 MBps) [2024-12-16T10:56:40.588Z] Copying: 1023/1024 [MB] (13 MBps) [2024-12-16T10:56:40.588Z] Copying: 1024/1024 [MB] (average 17 MBps)[2024-12-16 10:56:40.369849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.599 [2024-12-16 10:56:40.369940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:40.599 [2024-12-16 10:56:40.369957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:40.599 [2024-12-16 10:56:40.369967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.599 [2024-12-16 10:56:40.371953] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:40.599 [2024-12-16 10:56:40.374221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.599 [2024-12-16 10:56:40.374272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:40.599 [2024-12-16 10:56:40.374285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.082 ms 00:24:40.599 [2024-12-16 10:56:40.374301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.599 [2024-12-16 10:56:40.387483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.599 [2024-12-16 10:56:40.387659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:40.599 [2024-12-16 10:56:40.387683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.366 ms 00:24:40.599 [2024-12-16 10:56:40.387692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.599 [2024-12-16 10:56:40.411519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.599 [2024-12-16 10:56:40.411568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:40.599 [2024-12-16 10:56:40.411589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.803 ms 00:24:40.599 [2024-12-16 10:56:40.411598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.599 [2024-12-16 10:56:40.417797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.599 [2024-12-16 10:56:40.417837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:40.599 [2024-12-16 10:56:40.417850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.163 ms 00:24:40.599 [2024-12-16 10:56:40.417860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.599 [2024-12-16 10:56:40.420784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.599 [2024-12-16 10:56:40.420830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:40.599 [2024-12-16 10:56:40.420841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.876 ms 00:24:40.599 [2024-12-16 10:56:40.420848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.599 [2024-12-16 10:56:40.424838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.599 [2024-12-16 
10:56:40.425040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:40.599 [2024-12-16 10:56:40.425067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.948 ms 00:24:40.599 [2024-12-16 10:56:40.425075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.861 [2024-12-16 10:56:40.609654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.861 [2024-12-16 10:56:40.609729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:40.861 [2024-12-16 10:56:40.609745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 184.533 ms 00:24:40.861 [2024-12-16 10:56:40.609756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.861 [2024-12-16 10:56:40.613091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.861 [2024-12-16 10:56:40.613138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:24:40.861 [2024-12-16 10:56:40.613149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.317 ms 00:24:40.861 [2024-12-16 10:56:40.613157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.861 [2024-12-16 10:56:40.615921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.861 [2024-12-16 10:56:40.615975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:24:40.861 [2024-12-16 10:56:40.615985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.721 ms 00:24:40.861 [2024-12-16 10:56:40.615992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.861 [2024-12-16 10:56:40.618229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.861 [2024-12-16 10:56:40.618274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:40.861 [2024-12-16 10:56:40.618284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.198 ms 00:24:40.861 [2024-12-16 10:56:40.618291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.861 [2024-12-16 10:56:40.620479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.861 [2024-12-16 10:56:40.620524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:40.861 [2024-12-16 10:56:40.620535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.120 ms 00:24:40.861 [2024-12-16 10:56:40.620542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.861 [2024-12-16 10:56:40.620580] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:40.861 [2024-12-16 10:56:40.620605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 98048 / 261120 wr_cnt: 1 state: open 00:24:40.861 [2024-12-16 10:56:40.620616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:40.861 [2024-12-16 10:56:40.620625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:40.861 [2024-12-16 10:56:40.620633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:40.861 [2024-12-16 10:56:40.620641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:40.861 [2024-12-16 10:56:40.620649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 
state: free 00:24:40.861 [2024-12-16 10:56:40.620657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:40.861 [2024-12-16 10:56:40.620665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:40.861 [2024-12-16 10:56:40.620673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:40.861 [2024-12-16 10:56:40.620681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:40.861 [2024-12-16 10:56:40.620688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:40.861 [2024-12-16 10:56:40.620696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:40.861 [2024-12-16 10:56:40.620719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:40.861 [2024-12-16 10:56:40.620727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:40.861 [2024-12-16 10:56:40.620736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:40.861 [2024-12-16 10:56:40.620744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:40.861 [2024-12-16 10:56:40.620752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:40.861 [2024-12-16 10:56:40.620761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:40.861 [2024-12-16 10:56:40.620769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:40.861 [2024-12-16 10:56:40.620778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:40.861 [2024-12-16 10:56:40.620786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:40.861 [2024-12-16 10:56:40.620794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:40.861 [2024-12-16 10:56:40.620802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:40.861 [2024-12-16 10:56:40.620810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:40.861 [2024-12-16 10:56:40.620818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:40.861 [2024-12-16 10:56:40.620826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:40.861 [2024-12-16 10:56:40.620835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:40.861 [2024-12-16 10:56:40.620843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:40.861 [2024-12-16 10:56:40.620851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:40.861 [2024-12-16 10:56:40.620860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:40.861 [2024-12-16 10:56:40.620869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 
261120 wr_cnt: 0 state: free 00:24:40.861 [2024-12-16 10:56:40.620877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:40.861 [2024-12-16 10:56:40.620884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:40.861 [2024-12-16 10:56:40.620892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:40.861 [2024-12-16 10:56:40.620900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:40.861 [2024-12-16 10:56:40.620908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:40.861 [2024-12-16 10:56:40.620916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:40.861 [2024-12-16 10:56:40.620923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:40.861 [2024-12-16 10:56:40.620953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:40.861 [2024-12-16 10:56:40.620963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:40.861 [2024-12-16 10:56:40.620972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:40.861 [2024-12-16 10:56:40.620980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:40.861 [2024-12-16 10:56:40.620988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:40.861 [2024-12-16 10:56:40.620997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:40.861 [2024-12-16 10:56:40.621014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:40.861 [2024-12-16 10:56:40.621022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:40.861 [2024-12-16 10:56:40.621035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:40.861 [2024-12-16 10:56:40.621043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:40.861 [2024-12-16 10:56:40.621052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:40.861 [2024-12-16 10:56:40.621060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:40.861 [2024-12-16 10:56:40.621069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:40.861 [2024-12-16 10:56:40.621078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:40.861 [2024-12-16 10:56:40.621086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:40.861 [2024-12-16 10:56:40.621095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:40.861 [2024-12-16 10:56:40.621103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:40.861 [2024-12-16 10:56:40.621113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:40.861 [2024-12-16 10:56:40.621121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:40.861 [2024-12-16 10:56:40.621130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:40.861 [2024-12-16 10:56:40.621137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:40.861 [2024-12-16 10:56:40.621146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:40.861 [2024-12-16 10:56:40.621155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:40.861 [2024-12-16 10:56:40.621163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:40.861 [2024-12-16 10:56:40.621171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:40.861 [2024-12-16 10:56:40.621179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:40.861 [2024-12-16 10:56:40.621187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:40.861 [2024-12-16 10:56:40.621196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:40.862 [2024-12-16 10:56:40.621204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:40.862 [2024-12-16 10:56:40.621211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:40.862 [2024-12-16 10:56:40.621219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:40.862 [2024-12-16 10:56:40.621227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:40.862 [2024-12-16 10:56:40.621235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:40.862 [2024-12-16 10:56:40.621243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:40.862 [2024-12-16 10:56:40.621251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:40.862 [2024-12-16 10:56:40.621259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:40.862 [2024-12-16 10:56:40.621267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:40.862 [2024-12-16 10:56:40.621275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:40.862 [2024-12-16 10:56:40.621282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:40.862 [2024-12-16 10:56:40.621298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:40.862 [2024-12-16 10:56:40.621306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:40.862 [2024-12-16 10:56:40.621314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:40.862 [2024-12-16 10:56:40.621322] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:40.862 [2024-12-16 10:56:40.621329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:40.862 [2024-12-16 10:56:40.621337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:40.862 [2024-12-16 10:56:40.621344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:40.862 [2024-12-16 10:56:40.621352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:40.862 [2024-12-16 10:56:40.621360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:40.862 [2024-12-16 10:56:40.621368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:40.862 [2024-12-16 10:56:40.621377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:40.862 [2024-12-16 10:56:40.621385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:40.862 [2024-12-16 10:56:40.621396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:40.862 [2024-12-16 10:56:40.621403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:40.862 [2024-12-16 10:56:40.621412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:40.862 [2024-12-16 10:56:40.621420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:40.862 [2024-12-16 10:56:40.621430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:40.862 [2024-12-16 10:56:40.621438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:40.862 [2024-12-16 10:56:40.621446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:40.862 [2024-12-16 10:56:40.621454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:40.862 [2024-12-16 10:56:40.621463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:40.862 [2024-12-16 10:56:40.621470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:40.862 [2024-12-16 10:56:40.621479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:40.862 [2024-12-16 10:56:40.621495] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:40.862 [2024-12-16 10:56:40.621506] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 12c7a348-302f-4b33-b558-f5d26dfceb2c 00:24:40.862 [2024-12-16 10:56:40.621522] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 98048 00:24:40.862 [2024-12-16 10:56:40.621531] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 99008 00:24:40.862 [2024-12-16 10:56:40.621538] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 98048 00:24:40.862 [2024-12-16 10:56:40.621547] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0098 00:24:40.862 
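(Note on the WAF figure in the statistics dump here: it is simply total writes divided by user writes, both reported a few entries above. A minimal sanity check in Python, with the two counters copied from this dump; this is a sketch, not SPDK code:

    # Counters copied from the ftl_dev_dump_stats output above (in FTL blocks).
    total_writes = 99008  # user data plus FTL metadata writes
    user_writes = 98048   # writes submitted by the user
    waf = total_writes / user_writes
    print(f"WAF: {waf:.4f}")  # prints WAF: 1.0098, matching the logged value

A WAF this close to 1.0 means the run generated almost no write amplification: only 960 extra blocks of metadata on top of the user data.)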
[2024-12-16 10:56:40.621555] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:40.862 [2024-12-16 10:56:40.621563] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:40.862 [2024-12-16 10:56:40.621571] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:40.862 [2024-12-16 10:56:40.621577] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:40.862 [2024-12-16 10:56:40.621584] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:40.862 [2024-12-16 10:56:40.621592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.862 [2024-12-16 10:56:40.621600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:40.862 [2024-12-16 10:56:40.621609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.013 ms 00:24:40.862 [2024-12-16 10:56:40.621616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.862 [2024-12-16 10:56:40.623894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.862 [2024-12-16 10:56:40.624080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:40.862 [2024-12-16 10:56:40.624100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.257 ms 00:24:40.862 [2024-12-16 10:56:40.624109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.862 [2024-12-16 10:56:40.624243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.862 [2024-12-16 10:56:40.624255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:40.862 [2024-12-16 10:56:40.624268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.103 ms 00:24:40.862 [2024-12-16 10:56:40.624276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.862 [2024-12-16 10:56:40.631074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:40.862 [2024-12-16 10:56:40.631255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:40.862 [2024-12-16 10:56:40.631274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:40.862 [2024-12-16 10:56:40.631283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.862 [2024-12-16 10:56:40.631352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:40.862 [2024-12-16 10:56:40.631361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:40.862 [2024-12-16 10:56:40.631375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:40.862 [2024-12-16 10:56:40.631383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.862 [2024-12-16 10:56:40.631429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:40.862 [2024-12-16 10:56:40.631438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:40.862 [2024-12-16 10:56:40.631447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:40.862 [2024-12-16 10:56:40.631454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.862 [2024-12-16 10:56:40.631470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:40.862 [2024-12-16 10:56:40.631478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:40.862 [2024-12-16 10:56:40.631486] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:40.862 [2024-12-16 10:56:40.631497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.862 [2024-12-16 10:56:40.645304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:40.862 [2024-12-16 10:56:40.645358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:40.862 [2024-12-16 10:56:40.645370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:40.862 [2024-12-16 10:56:40.645381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.862 [2024-12-16 10:56:40.656678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:40.862 [2024-12-16 10:56:40.656746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:40.862 [2024-12-16 10:56:40.656768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:40.862 [2024-12-16 10:56:40.656776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.862 [2024-12-16 10:56:40.656833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:40.862 [2024-12-16 10:56:40.656843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:40.862 [2024-12-16 10:56:40.656852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:40.862 [2024-12-16 10:56:40.656860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.862 [2024-12-16 10:56:40.656898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:40.862 [2024-12-16 10:56:40.656908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:40.862 [2024-12-16 10:56:40.656948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:40.862 [2024-12-16 10:56:40.656958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.862 [2024-12-16 10:56:40.657039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:40.862 [2024-12-16 10:56:40.657049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:40.862 [2024-12-16 10:56:40.657059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:40.862 [2024-12-16 10:56:40.657067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.862 [2024-12-16 10:56:40.657096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:40.862 [2024-12-16 10:56:40.657108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:40.862 [2024-12-16 10:56:40.657117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:40.862 [2024-12-16 10:56:40.657125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.862 [2024-12-16 10:56:40.657171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:40.862 [2024-12-16 10:56:40.657182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:40.862 [2024-12-16 10:56:40.657190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:40.862 [2024-12-16 10:56:40.657199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.862 [2024-12-16 10:56:40.657254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:40.862 [2024-12-16 10:56:40.657267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:40.862 
[2024-12-16 10:56:40.657277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:40.862 [2024-12-16 10:56:40.657285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.862 [2024-12-16 10:56:40.657429] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 288.060 ms, result 0 00:24:41.804 00:24:41.804 00:24:41.804 10:56:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:24:43.718 10:56:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:24:43.979 [2024-12-16 10:56:43.765170] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:24:43.979 [2024-12-16 10:56:43.765328] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90477 ] 00:24:43.979 [2024-12-16 10:56:43.906507] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:43.979 [2024-12-16 10:56:43.957446] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:24:44.239 [2024-12-16 10:56:44.072698] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:44.239 [2024-12-16 10:56:44.073087] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:44.501 [2024-12-16 10:56:44.234770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.501 [2024-12-16 10:56:44.234829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:44.501 [2024-12-16 10:56:44.234848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:44.501 [2024-12-16 10:56:44.234857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.501 [2024-12-16 10:56:44.234915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.501 [2024-12-16 10:56:44.234951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:44.501 [2024-12-16 10:56:44.234961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:24:44.501 [2024-12-16 10:56:44.234975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.501 [2024-12-16 10:56:44.234997] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:44.501 [2024-12-16 10:56:44.235275] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:44.501 [2024-12-16 10:56:44.235298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.501 [2024-12-16 10:56:44.235307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:44.501 [2024-12-16 10:56:44.235321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.307 ms 00:24:44.501 [2024-12-16 10:56:44.235334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.501 [2024-12-16 10:56:44.237084] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:44.501 [2024-12-16 10:56:44.240835] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:24:44.501 [2024-12-16 10:56:44.241047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:44.501 [2024-12-16 10:56:44.241070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.753 ms 00:24:44.501 [2024-12-16 10:56:44.241089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.501 [2024-12-16 10:56:44.241161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.501 [2024-12-16 10:56:44.241177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:24:44.501 [2024-12-16 10:56:44.241187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:24:44.501 [2024-12-16 10:56:44.241194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.501 [2024-12-16 10:56:44.249315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.501 [2024-12-16 10:56:44.249361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:44.501 [2024-12-16 10:56:44.249373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.073 ms 00:24:44.501 [2024-12-16 10:56:44.249391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.501 [2024-12-16 10:56:44.249497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.501 [2024-12-16 10:56:44.249512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:44.501 [2024-12-16 10:56:44.249521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:24:44.501 [2024-12-16 10:56:44.249528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.501 [2024-12-16 10:56:44.249586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.501 [2024-12-16 10:56:44.249600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:44.501 [2024-12-16 10:56:44.249610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:24:44.501 [2024-12-16 10:56:44.249618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.501 [2024-12-16 10:56:44.249640] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:44.501 [2024-12-16 10:56:44.251709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.501 [2024-12-16 10:56:44.251751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:44.501 [2024-12-16 10:56:44.251760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.074 ms 00:24:44.501 [2024-12-16 10:56:44.251776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.501 [2024-12-16 10:56:44.251813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.501 [2024-12-16 10:56:44.251821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:44.501 [2024-12-16 10:56:44.251831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:24:44.501 [2024-12-16 10:56:44.251838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.501 [2024-12-16 10:56:44.251860] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:44.501 [2024-12-16 10:56:44.251888] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:24:44.501 [2024-12-16 10:56:44.251952] 
upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:44.501 [2024-12-16 10:56:44.251972] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:24:44.501 [2024-12-16 10:56:44.252095] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:44.501 [2024-12-16 10:56:44.252107] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:44.501 [2024-12-16 10:56:44.252118] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:24:44.501 [2024-12-16 10:56:44.252129] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:44.501 [2024-12-16 10:56:44.252142] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:44.501 [2024-12-16 10:56:44.252150] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:44.501 [2024-12-16 10:56:44.252163] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:44.501 [2024-12-16 10:56:44.252174] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:44.501 [2024-12-16 10:56:44.252182] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:44.501 [2024-12-16 10:56:44.252189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.501 [2024-12-16 10:56:44.252197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:44.501 [2024-12-16 10:56:44.252205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.332 ms 00:24:44.501 [2024-12-16 10:56:44.252216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.501 [2024-12-16 10:56:44.252300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.501 [2024-12-16 10:56:44.252311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:44.501 [2024-12-16 10:56:44.252321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:24:44.501 [2024-12-16 10:56:44.252329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.501 [2024-12-16 10:56:44.252426] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:44.501 [2024-12-16 10:56:44.252438] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:44.501 [2024-12-16 10:56:44.252451] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:44.501 [2024-12-16 10:56:44.252461] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:44.501 [2024-12-16 10:56:44.252470] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:44.501 [2024-12-16 10:56:44.252478] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:44.501 [2024-12-16 10:56:44.252485] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:44.501 [2024-12-16 10:56:44.252494] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:44.501 [2024-12-16 10:56:44.252502] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:44.501 [2024-12-16 10:56:44.252510] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:44.501 [2024-12-16 10:56:44.252518] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:44.501 [2024-12-16 10:56:44.252526] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:44.501 [2024-12-16 10:56:44.252534] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:44.501 [2024-12-16 10:56:44.252550] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:44.501 [2024-12-16 10:56:44.252558] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:24:44.501 [2024-12-16 10:56:44.252565] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:44.501 [2024-12-16 10:56:44.252573] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:44.501 [2024-12-16 10:56:44.252582] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:24:44.502 [2024-12-16 10:56:44.252591] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:44.502 [2024-12-16 10:56:44.252599] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:44.502 [2024-12-16 10:56:44.252607] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:44.502 [2024-12-16 10:56:44.252615] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:44.502 [2024-12-16 10:56:44.252623] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:44.502 [2024-12-16 10:56:44.252631] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:44.502 [2024-12-16 10:56:44.252639] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:44.502 [2024-12-16 10:56:44.252647] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:44.502 [2024-12-16 10:56:44.252654] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:44.502 [2024-12-16 10:56:44.252662] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:44.502 [2024-12-16 10:56:44.252670] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:44.502 [2024-12-16 10:56:44.252680] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:24:44.502 [2024-12-16 10:56:44.252688] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:44.502 [2024-12-16 10:56:44.252696] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:44.502 [2024-12-16 10:56:44.252721] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:24:44.502 [2024-12-16 10:56:44.252729] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:44.502 [2024-12-16 10:56:44.252737] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:44.502 [2024-12-16 10:56:44.252744] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:24:44.502 [2024-12-16 10:56:44.252751] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:44.502 [2024-12-16 10:56:44.252759] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:44.502 [2024-12-16 10:56:44.252767] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:24:44.502 [2024-12-16 10:56:44.252775] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:44.502 [2024-12-16 10:56:44.252782] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:44.502 [2024-12-16 10:56:44.252790] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 
00:24:44.502 [2024-12-16 10:56:44.252797] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:44.502 [2024-12-16 10:56:44.252805] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:44.502 [2024-12-16 10:56:44.252814] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:44.502 [2024-12-16 10:56:44.252826] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:44.502 [2024-12-16 10:56:44.252837] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:44.502 [2024-12-16 10:56:44.252846] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:44.502 [2024-12-16 10:56:44.252854] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:44.502 [2024-12-16 10:56:44.252861] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:44.502 [2024-12-16 10:56:44.252873] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:44.502 [2024-12-16 10:56:44.252880] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:44.502 [2024-12-16 10:56:44.252887] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:44.502 [2024-12-16 10:56:44.252896] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:44.502 [2024-12-16 10:56:44.252912] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:44.502 [2024-12-16 10:56:44.252921] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:44.502 [2024-12-16 10:56:44.252944] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:24:44.502 [2024-12-16 10:56:44.252952] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:24:44.502 [2024-12-16 10:56:44.252959] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:24:44.502 [2024-12-16 10:56:44.252966] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:24:44.502 [2024-12-16 10:56:44.252974] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:24:44.502 [2024-12-16 10:56:44.252984] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:24:44.502 [2024-12-16 10:56:44.252992] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:24:44.502 [2024-12-16 10:56:44.252999] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:24:44.502 [2024-12-16 10:56:44.253012] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:24:44.502 [2024-12-16 10:56:44.253020] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:24:44.502 [2024-12-16 10:56:44.253027] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:24:44.502 [2024-12-16 10:56:44.253034] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:24:44.502 [2024-12-16 10:56:44.253042] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:24:44.502 [2024-12-16 10:56:44.253049] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:44.502 [2024-12-16 10:56:44.253057] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:44.502 [2024-12-16 10:56:44.253066] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:44.502 [2024-12-16 10:56:44.253073] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:44.502 [2024-12-16 10:56:44.253081] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:44.502 [2024-12-16 10:56:44.253088] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:44.502 [2024-12-16 10:56:44.253095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.502 [2024-12-16 10:56:44.253103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:44.502 [2024-12-16 10:56:44.253113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.737 ms 00:24:44.502 [2024-12-16 10:56:44.253120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.502 [2024-12-16 10:56:44.275877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.502 [2024-12-16 10:56:44.275959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:44.502 [2024-12-16 10:56:44.275975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.708 ms 00:24:44.502 [2024-12-16 10:56:44.275990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.502 [2024-12-16 10:56:44.276103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.502 [2024-12-16 10:56:44.276123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:44.502 [2024-12-16 10:56:44.276133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:24:44.502 [2024-12-16 10:56:44.276143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.502 [2024-12-16 10:56:44.287831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.502 [2024-12-16 10:56:44.287882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:44.502 [2024-12-16 10:56:44.287894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.617 ms 00:24:44.502 [2024-12-16 10:56:44.287903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.502 [2024-12-16 10:56:44.287952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.502 [2024-12-16 10:56:44.287962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:44.502 
[2024-12-16 10:56:44.287971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:24:44.502 [2024-12-16 10:56:44.287979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.502 [2024-12-16 10:56:44.288513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.502 [2024-12-16 10:56:44.288562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:44.502 [2024-12-16 10:56:44.288574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.482 ms 00:24:44.502 [2024-12-16 10:56:44.288583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.502 [2024-12-16 10:56:44.288751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.502 [2024-12-16 10:56:44.288762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:44.502 [2024-12-16 10:56:44.288772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.145 ms 00:24:44.502 [2024-12-16 10:56:44.288781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.502 [2024-12-16 10:56:44.295720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.502 [2024-12-16 10:56:44.295771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:44.502 [2024-12-16 10:56:44.295781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.914 ms 00:24:44.502 [2024-12-16 10:56:44.295789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.502 [2024-12-16 10:56:44.299698] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:24:44.502 [2024-12-16 10:56:44.299749] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:44.502 [2024-12-16 10:56:44.299765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.502 [2024-12-16 10:56:44.299774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:44.502 [2024-12-16 10:56:44.299783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.878 ms 00:24:44.502 [2024-12-16 10:56:44.299791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.502 [2024-12-16 10:56:44.315598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.502 [2024-12-16 10:56:44.315650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:44.502 [2024-12-16 10:56:44.315662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.755 ms 00:24:44.502 [2024-12-16 10:56:44.315676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.502 [2024-12-16 10:56:44.318676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.502 [2024-12-16 10:56:44.318850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:44.503 [2024-12-16 10:56:44.318869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.948 ms 00:24:44.503 [2024-12-16 10:56:44.318877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.503 [2024-12-16 10:56:44.321342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.503 [2024-12-16 10:56:44.321387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:44.503 [2024-12-16 10:56:44.321397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 2.427 ms 00:24:44.503 [2024-12-16 10:56:44.321404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.503 [2024-12-16 10:56:44.321746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.503 [2024-12-16 10:56:44.321761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:44.503 [2024-12-16 10:56:44.321775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.261 ms 00:24:44.503 [2024-12-16 10:56:44.321783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.503 [2024-12-16 10:56:44.346668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.503 [2024-12-16 10:56:44.346873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:44.503 [2024-12-16 10:56:44.346946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.866 ms 00:24:44.503 [2024-12-16 10:56:44.346971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.503 [2024-12-16 10:56:44.355283] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:44.503 [2024-12-16 10:56:44.359014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.503 [2024-12-16 10:56:44.359164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:44.503 [2024-12-16 10:56:44.359219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.989 ms 00:24:44.503 [2024-12-16 10:56:44.359244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.503 [2024-12-16 10:56:44.359351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.503 [2024-12-16 10:56:44.359383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:44.503 [2024-12-16 10:56:44.359395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:24:44.503 [2024-12-16 10:56:44.359408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.503 [2024-12-16 10:56:44.361314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.503 [2024-12-16 10:56:44.361724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:44.503 [2024-12-16 10:56:44.361764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.861 ms 00:24:44.503 [2024-12-16 10:56:44.361774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.503 [2024-12-16 10:56:44.361837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.503 [2024-12-16 10:56:44.361856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:44.503 [2024-12-16 10:56:44.361873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:24:44.503 [2024-12-16 10:56:44.361882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.503 [2024-12-16 10:56:44.361923] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:44.503 [2024-12-16 10:56:44.361953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.503 [2024-12-16 10:56:44.361965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:44.503 [2024-12-16 10:56:44.361974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:24:44.503 [2024-12-16 10:56:44.361986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:24:44.503 [2024-12-16 10:56:44.367743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.503 [2024-12-16 10:56:44.367797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:44.503 [2024-12-16 10:56:44.367810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.731 ms 00:24:44.503 [2024-12-16 10:56:44.367819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.503 [2024-12-16 10:56:44.367912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.503 [2024-12-16 10:56:44.367923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:44.503 [2024-12-16 10:56:44.367949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:24:44.503 [2024-12-16 10:56:44.367958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.503 [2024-12-16 10:56:44.369256] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 133.961 ms, result 0 00:24:45.885 [2024-12-16T10:56:46.818Z] Copying: 1108/1048576 [kB] (1108 kBps) [... intermediate dd progress records elided (throughput ramping from ~1 MBps to ~40 MBps over ~45 s) ...] [2024-12-16T10:57:31.861Z] Copying: 1024/1024 [MB] (average 21 MBps)[2024-12-16 10:57:31.692491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.872 [2024-12-16 10:57:31.692557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:31.872 [2024-12-16 10:57:31.692574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:31.872 [2024-12-16 10:57:31.692583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.872 [2024-12-16 10:57:31.692607] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:31.872 [2024-12-16 10:57:31.693358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.872 [2024-12-16 10:57:31.693400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:31.872 [2024-12-16 10:57:31.693412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.735 ms 00:25:31.872 [2024-12-16 10:57:31.693422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.872 [2024-12-16 10:57:31.693734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.872 [2024-12-16 10:57:31.693753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:31.872 [2024-12-16 10:57:31.693765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.211 ms 00:25:31.872 [2024-12-16 10:57:31.693773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.872 [2024-12-16 10:57:31.703845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.872 [2024-12-16 10:57:31.703888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:31.872 [2024-12-16 10:57:31.703899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.055 ms 00:25:31.872 [2024-12-16 10:57:31.703912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.872 [2024-12-16 10:57:31.709098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.872 [2024-12-16 10:57:31.709256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:31.872 [2024-12-16 10:57:31.709273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.141 ms 00:25:31.872 [2024-12-16 10:57:31.709288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.872 [2024-12-16 10:57:31.710547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.872 [2024-12-16 10:57:31.710585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:31.872 [2024-12-16 10:57:31.710594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.218 ms 00:25:31.872 [2024-12-16 10:57:31.710600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.872 [2024-12-16 10:57:31.714482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.872 [2024-12-16 10:57:31.714521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:31.872 [2024-12-16 10:57:31.714535]
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.852 ms 00:25:31.872 [2024-12-16 10:57:31.714542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.872 [2024-12-16 10:57:31.715491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.872 [2024-12-16 10:57:31.715536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:31.872 [2024-12-16 10:57:31.715546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.928 ms 00:25:31.872 [2024-12-16 10:57:31.715559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.872 [2024-12-16 10:57:31.717462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.872 [2024-12-16 10:57:31.717581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:31.872 [2024-12-16 10:57:31.717590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.888 ms 00:25:31.872 [2024-12-16 10:57:31.717596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.872 [2024-12-16 10:57:31.718900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.872 [2024-12-16 10:57:31.718944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:31.872 [2024-12-16 10:57:31.718952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.270 ms 00:25:31.872 [2024-12-16 10:57:31.718958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.872 [2024-12-16 10:57:31.720210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.872 [2024-12-16 10:57:31.720340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:31.872 [2024-12-16 10:57:31.720362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.221 ms 00:25:31.872 [2024-12-16 10:57:31.720368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.872 [2024-12-16 10:57:31.721490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.872 [2024-12-16 10:57:31.721525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:31.872 [2024-12-16 10:57:31.721533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.075 ms 00:25:31.872 [2024-12-16 10:57:31.721538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.872 [2024-12-16 10:57:31.721569] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:31.872 [2024-12-16 10:57:31.721582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:25:31.872 [2024-12-16 10:57:31.721591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:25:31.872 [2024-12-16 10:57:31.721599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:31.872 [2024-12-16 10:57:31.721606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:31.872 [2024-12-16 10:57:31.721613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:31.872 [2024-12-16 10:57:31.721620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:31.872 [2024-12-16 10:57:31.721627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 
state: free 00:25:31.872 [2024-12-16 10:57:31.721633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:31.872 [2024-12-16 10:57:31.721640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:31.872 [2024-12-16 10:57:31.721646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:31.872 [2024-12-16 10:57:31.721653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:31.872 [2024-12-16 10:57:31.721659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:31.872 [2024-12-16 10:57:31.721666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:31.872 [2024-12-16 10:57:31.721673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:31.872 [2024-12-16 10:57:31.721679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:31.872 [2024-12-16 10:57:31.721684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:31.872 [2024-12-16 10:57:31.721691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:31.872 [2024-12-16 10:57:31.721697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:31.872 [2024-12-16 10:57:31.721703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:31.872 [2024-12-16 10:57:31.721709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:31.872 [2024-12-16 10:57:31.721715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:31.873 [2024-12-16 10:57:31.721722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:31.873 [2024-12-16 10:57:31.721727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:31.873 [2024-12-16 10:57:31.721733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:31.873 [2024-12-16 10:57:31.721740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:31.873 [2024-12-16 10:57:31.721747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:31.873 [2024-12-16 10:57:31.721755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:31.873 [2024-12-16 10:57:31.721762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:31.873 [2024-12-16 10:57:31.721768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:31.873 [2024-12-16 10:57:31.721774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:31.873 [2024-12-16 10:57:31.721780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:31.873 [2024-12-16 10:57:31.721787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 
/ 261120 wr_cnt: 0 state: free 00:25:31.873 [2024-12-16 10:57:31.721793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:31.873 [2024-12-16 10:57:31.721801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:31.873 [2024-12-16 10:57:31.721808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:31.873 [2024-12-16 10:57:31.721815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:31.873 [2024-12-16 10:57:31.721821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:31.873 [2024-12-16 10:57:31.721827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:31.873 [2024-12-16 10:57:31.721834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:31.873 [2024-12-16 10:57:31.721841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:31.873 [2024-12-16 10:57:31.721847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:31.873 [2024-12-16 10:57:31.721853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:31.873 [2024-12-16 10:57:31.721860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:31.873 [2024-12-16 10:57:31.721867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:31.873 [2024-12-16 10:57:31.721873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:31.873 [2024-12-16 10:57:31.721879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:31.873 [2024-12-16 10:57:31.721886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:31.873 [2024-12-16 10:57:31.721891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:31.873 [2024-12-16 10:57:31.721898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:31.873 [2024-12-16 10:57:31.721912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:31.873 [2024-12-16 10:57:31.721919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:31.873 [2024-12-16 10:57:31.721925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:31.873 [2024-12-16 10:57:31.721943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:31.873 [2024-12-16 10:57:31.721950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:31.873 [2024-12-16 10:57:31.721957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:31.873 [2024-12-16 10:57:31.721964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:31.873 [2024-12-16 10:57:31.721971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:31.873 [2024-12-16 10:57:31.721979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:31.873 [2024-12-16 10:57:31.721986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:31.873 [2024-12-16 10:57:31.721993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:31.873 [2024-12-16 10:57:31.722000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:31.873 [2024-12-16 10:57:31.722006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:31.873 [2024-12-16 10:57:31.722013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:31.873 [2024-12-16 10:57:31.722020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:31.873 [2024-12-16 10:57:31.722027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:31.873 [2024-12-16 10:57:31.722034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:31.873 [2024-12-16 10:57:31.722040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:31.873 [2024-12-16 10:57:31.722047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:31.873 [2024-12-16 10:57:31.722053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:31.873 [2024-12-16 10:57:31.722059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:31.873 [2024-12-16 10:57:31.722066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:31.873 [2024-12-16 10:57:31.722073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:31.873 [2024-12-16 10:57:31.722080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:31.873 [2024-12-16 10:57:31.722086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:31.873 [2024-12-16 10:57:31.722092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:31.873 [2024-12-16 10:57:31.722098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:31.873 [2024-12-16 10:57:31.722104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:31.873 [2024-12-16 10:57:31.722110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:31.873 [2024-12-16 10:57:31.722117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:31.873 [2024-12-16 10:57:31.722123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:31.873 [2024-12-16 10:57:31.722129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:31.873 [2024-12-16 10:57:31.722135] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:31.873 [2024-12-16 10:57:31.722141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:31.873 [2024-12-16 10:57:31.722147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:31.873 [2024-12-16 10:57:31.722154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:31.873 [2024-12-16 10:57:31.722161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:31.873 [2024-12-16 10:57:31.722167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:31.873 [2024-12-16 10:57:31.722173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:31.873 [2024-12-16 10:57:31.722179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:31.873 [2024-12-16 10:57:31.722186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:31.873 [2024-12-16 10:57:31.722191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:31.873 [2024-12-16 10:57:31.722198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:31.873 [2024-12-16 10:57:31.722204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:31.873 [2024-12-16 10:57:31.722211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:31.873 [2024-12-16 10:57:31.722217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:31.873 [2024-12-16 10:57:31.722223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:31.873 [2024-12-16 10:57:31.722229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:31.873 [2024-12-16 10:57:31.722236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:31.873 [2024-12-16 10:57:31.722242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:31.873 [2024-12-16 10:57:31.722249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:31.873 [2024-12-16 10:57:31.722262] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:31.873 [2024-12-16 10:57:31.722272] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 12c7a348-302f-4b33-b558-f5d26dfceb2c 00:25:31.873 [2024-12-16 10:57:31.722284] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:25:31.873 [2024-12-16 10:57:31.722293] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 166592 00:25:31.873 [2024-12-16 10:57:31.722300] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 164608 00:25:31.873 [2024-12-16 10:57:31.722307] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0121 00:25:31.874 [2024-12-16 10:57:31.722313] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:31.874 [2024-12-16 10:57:31.722319] 
ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:31.874 [2024-12-16 10:57:31.722326] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:31.874 [2024-12-16 10:57:31.722332] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:31.874 [2024-12-16 10:57:31.722338] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:31.874 [2024-12-16 10:57:31.722344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.874 [2024-12-16 10:57:31.722353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:31.874 [2024-12-16 10:57:31.722361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.776 ms 00:25:31.874 [2024-12-16 10:57:31.722367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.874 [2024-12-16 10:57:31.724258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.874 [2024-12-16 10:57:31.724281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:31.874 [2024-12-16 10:57:31.724290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.878 ms 00:25:31.874 [2024-12-16 10:57:31.724302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.874 [2024-12-16 10:57:31.724403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.874 [2024-12-16 10:57:31.724410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:31.874 [2024-12-16 10:57:31.724417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:25:31.874 [2024-12-16 10:57:31.724426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.874 [2024-12-16 10:57:31.729726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:31.874 [2024-12-16 10:57:31.729762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:31.874 [2024-12-16 10:57:31.729770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:31.874 [2024-12-16 10:57:31.729776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.874 [2024-12-16 10:57:31.729820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:31.874 [2024-12-16 10:57:31.729833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:31.874 [2024-12-16 10:57:31.729840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:31.874 [2024-12-16 10:57:31.729848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.874 [2024-12-16 10:57:31.729883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:31.874 [2024-12-16 10:57:31.729891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:31.874 [2024-12-16 10:57:31.729897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:31.874 [2024-12-16 10:57:31.729904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.874 [2024-12-16 10:57:31.729918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:31.874 [2024-12-16 10:57:31.729925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:31.874 [2024-12-16 10:57:31.729949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:31.874 [2024-12-16 10:57:31.729955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
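The statistics block above reports total writes 166592 against user writes 164608; the WAF line is simply their ratio. A quick standalone check (values copied from the dump; not SPDK code):

```c
/* Reproduce the WAF (write amplification factor) printed by
 * ftl_dev_dump_stats: total media writes divided by user writes. */
#include <stdio.h>

int main(void)
{
    const double total_writes = 166592.0; /* "total writes" from the dump */
    const double user_writes  = 164608.0; /* "user writes" from the dump  */
    printf("WAF = %.4f\n", total_writes / user_writes); /* prints 1.0121 */
    return 0;
}
```

The ~2000 extra block writes are presumably the metadata persisted alongside user data (band, P2L and superblock updates), which is why this largely sequential dd workload stays so close to 1.0.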
00:25:31.874 [2024-12-16 10:57:31.740302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:31.874 [2024-12-16 10:57:31.740341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:31.874 [2024-12-16 10:57:31.740351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:31.874 [2024-12-16 10:57:31.740357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.874 [2024-12-16 10:57:31.749054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:31.874 [2024-12-16 10:57:31.749100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:31.874 [2024-12-16 10:57:31.749109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:31.874 [2024-12-16 10:57:31.749119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.874 [2024-12-16 10:57:31.749164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:31.874 [2024-12-16 10:57:31.749176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:31.874 [2024-12-16 10:57:31.749183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:31.874 [2024-12-16 10:57:31.749189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.874 [2024-12-16 10:57:31.749211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:31.874 [2024-12-16 10:57:31.749218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:31.874 [2024-12-16 10:57:31.749225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:31.874 [2024-12-16 10:57:31.749231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.874 [2024-12-16 10:57:31.749293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:31.874 [2024-12-16 10:57:31.749301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:31.874 [2024-12-16 10:57:31.749312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:31.874 [2024-12-16 10:57:31.749318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.874 [2024-12-16 10:57:31.749342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:31.874 [2024-12-16 10:57:31.749350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:31.874 [2024-12-16 10:57:31.749359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:31.874 [2024-12-16 10:57:31.749365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.874 [2024-12-16 10:57:31.749399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:31.874 [2024-12-16 10:57:31.749407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:31.874 [2024-12-16 10:57:31.749413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:31.874 [2024-12-16 10:57:31.749419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.874 [2024-12-16 10:57:31.749456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:31.874 [2024-12-16 10:57:31.749464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:31.874 [2024-12-16 10:57:31.749471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:31.874 [2024-12-16
10:57:31.749478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.874 [2024-12-16 10:57:31.749585] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 57.070 ms, result 0 00:25:32.135 00:25:32.135 00:25:32.135 10:57:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:25:34.680 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:25:34.680 10:57:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:34.680 [2024-12-16 10:57:34.108349] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:25:34.680 [2024-12-16 10:57:34.108468] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90990 ] 00:25:34.680 [2024-12-16 10:57:34.242813] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:34.680 [2024-12-16 10:57:34.272456] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:25:34.680 [2024-12-16 10:57:34.352514] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:34.680 [2024-12-16 10:57:34.352729] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:34.680 [2024-12-16 10:57:34.494458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.680 [2024-12-16 10:57:34.494491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:34.680 [2024-12-16 10:57:34.494503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:34.680 [2024-12-16 10:57:34.494509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.680 [2024-12-16 10:57:34.494545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.680 [2024-12-16 10:57:34.494552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:34.680 [2024-12-16 10:57:34.494559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:25:34.680 [2024-12-16 10:57:34.494572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.680 [2024-12-16 10:57:34.494584] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:34.680 [2024-12-16 10:57:34.494769] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:34.680 [2024-12-16 10:57:34.494779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.680 [2024-12-16 10:57:34.494786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:34.680 [2024-12-16 10:57:34.494793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.198 ms 00:25:34.680 [2024-12-16 10:57:34.494799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.680 [2024-12-16 10:57:34.495790] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:25:34.680 [2024-12-16 10:57:34.497742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.680 
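Every management step in these logs is bracketed by the same four records from mngt/ftl_mngt.c — Action (or Rollback), name, duration, status — with a finish_msg summary once the whole process completes. A minimal illustration of that tracing pattern (hypothetical harness, not the mngt/ftl_mngt.c implementation):

```c
/* Hypothetical sketch of the step-trace pattern seen in this log: each step
 * logs Action/name/duration/status; the process ends with a summary line. */
#include <stdio.h>
#include <time.h>

struct step { const char *name; int (*fn)(void); };

static double ms_since(const struct timespec *t0)
{
    struct timespec t1;
    clock_gettime(CLOCK_MONOTONIC, &t1);
    return (t1.tv_sec - t0->tv_sec) * 1e3 + (t1.tv_nsec - t0->tv_nsec) / 1e6;
}

static int run_process(const char *proc, const struct step *steps, int n)
{
    struct timespec p0;
    int rc = 0;
    clock_gettime(CLOCK_MONOTONIC, &p0);
    for (int i = 0; i < n && rc == 0; i++) {
        struct timespec s0;
        clock_gettime(CLOCK_MONOTONIC, &s0);
        rc = steps[i].fn();
        printf("Action\nname: %s\nduration: %.3f ms\nstatus: %d\n",
               steps[i].name, ms_since(&s0), rc);
    }
    printf("Management process finished, name '%s', duration = %.3f ms, result %d\n",
           proc, ms_since(&p0), rc);
    return rc;
}

static int check_configuration(void) { return 0; } /* placeholder step */

int main(void)
{
    const struct step steps[] = { { "Check configuration", check_configuration } };
    return run_process("FTL startup", steps, 1);
}
```

The Rollback records with duration: 0.000 ms seen during shutdown above correspond to the same machinery replaying each completed step's undo hook in reverse order.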
[2024-12-16 10:57:34.497770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:25:34.680 [2024-12-16 10:57:34.497778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.952 ms 00:25:34.680 [2024-12-16 10:57:34.497784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.680 [2024-12-16 10:57:34.497826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.680 [2024-12-16 10:57:34.497838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:25:34.680 [2024-12-16 10:57:34.497846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:25:34.680 [2024-12-16 10:57:34.497854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.680 [2024-12-16 10:57:34.502108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.680 [2024-12-16 10:57:34.502132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:34.680 [2024-12-16 10:57:34.502139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.227 ms 00:25:34.680 [2024-12-16 10:57:34.502145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.680 [2024-12-16 10:57:34.502224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.680 [2024-12-16 10:57:34.502234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:34.680 [2024-12-16 10:57:34.502240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:25:34.680 [2024-12-16 10:57:34.502248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.680 [2024-12-16 10:57:34.502288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.680 [2024-12-16 10:57:34.502296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:34.680 [2024-12-16 10:57:34.502302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:25:34.680 [2024-12-16 10:57:34.502307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.680 [2024-12-16 10:57:34.502326] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:34.680 [2024-12-16 10:57:34.503450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.680 [2024-12-16 10:57:34.503476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:34.680 [2024-12-16 10:57:34.503482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.127 ms 00:25:34.680 [2024-12-16 10:57:34.503488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.680 [2024-12-16 10:57:34.503515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.680 [2024-12-16 10:57:34.503521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:34.680 [2024-12-16 10:57:34.503527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:25:34.680 [2024-12-16 10:57:34.503532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.680 [2024-12-16 10:57:34.503546] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:34.680 [2024-12-16 10:57:34.503563] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:25:34.680 [2024-12-16 10:57:34.503593] upgrade/ftl_sb_v5.c: 
287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:34.680 [2024-12-16 10:57:34.503608] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:25:34.680 [2024-12-16 10:57:34.503686] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:34.680 [2024-12-16 10:57:34.503693] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:34.680 [2024-12-16 10:57:34.503701] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:34.680 [2024-12-16 10:57:34.503712] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:34.680 [2024-12-16 10:57:34.503721] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:34.680 [2024-12-16 10:57:34.503727] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:25:34.680 [2024-12-16 10:57:34.503733] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:34.680 [2024-12-16 10:57:34.503738] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:34.680 [2024-12-16 10:57:34.503747] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:34.680 [2024-12-16 10:57:34.503755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.680 [2024-12-16 10:57:34.503760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:34.680 [2024-12-16 10:57:34.503766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.210 ms 00:25:34.680 [2024-12-16 10:57:34.503771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.680 [2024-12-16 10:57:34.503835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.680 [2024-12-16 10:57:34.503841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:34.680 [2024-12-16 10:57:34.503850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:25:34.680 [2024-12-16 10:57:34.503855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.680 [2024-12-16 10:57:34.503940] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:34.680 [2024-12-16 10:57:34.503949] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:34.680 [2024-12-16 10:57:34.503955] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:34.680 [2024-12-16 10:57:34.503961] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:34.680 [2024-12-16 10:57:34.503967] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:34.680 [2024-12-16 10:57:34.503972] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:34.680 [2024-12-16 10:57:34.503977] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:25:34.680 [2024-12-16 10:57:34.503983] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:34.680 [2024-12-16 10:57:34.503989] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:25:34.680 [2024-12-16 10:57:34.503995] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:34.680 [2024-12-16 10:57:34.504004] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:34.680 [2024-12-16 10:57:34.504010] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:25:34.680 [2024-12-16 10:57:34.504018] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:34.680 [2024-12-16 10:57:34.504023] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:34.680 [2024-12-16 10:57:34.504028] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:25:34.680 [2024-12-16 10:57:34.504033] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:34.680 [2024-12-16 10:57:34.504038] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:34.680 [2024-12-16 10:57:34.504043] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:25:34.680 [2024-12-16 10:57:34.504048] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:34.680 [2024-12-16 10:57:34.504053] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:34.680 [2024-12-16 10:57:34.504058] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:25:34.680 [2024-12-16 10:57:34.504064] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:34.680 [2024-12-16 10:57:34.504069] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:34.680 [2024-12-16 10:57:34.504073] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:25:34.680 [2024-12-16 10:57:34.504078] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:34.681 [2024-12-16 10:57:34.504083] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:34.681 [2024-12-16 10:57:34.504088] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:25:34.681 [2024-12-16 10:57:34.504092] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:34.681 [2024-12-16 10:57:34.504101] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:34.681 [2024-12-16 10:57:34.504107] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:25:34.681 [2024-12-16 10:57:34.504112] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:34.681 [2024-12-16 10:57:34.504118] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:34.681 [2024-12-16 10:57:34.504123] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:25:34.681 [2024-12-16 10:57:34.504129] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:34.681 [2024-12-16 10:57:34.504135] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:34.681 [2024-12-16 10:57:34.504141] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:25:34.681 [2024-12-16 10:57:34.504146] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:34.681 [2024-12-16 10:57:34.504152] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:34.681 [2024-12-16 10:57:34.504157] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:25:34.681 [2024-12-16 10:57:34.504163] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:34.681 [2024-12-16 10:57:34.504168] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:34.681 [2024-12-16 10:57:34.504174] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:25:34.681 
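Note how the l2p region ties back to the parameters logged at layout setup just above: 20971520 L2P entries at 4 bytes each is exactly the 80.00 MiB shown for Region l2p. A standalone cross-check (values from the log; the flat-array layout is an assumption, not SPDK code):

```c
/* Cross-check: L2P table size from the logged parameters.
 * 20971520 entries x 4 B/entry = 80 MiB, matching "Region l2p ... 80.00 MiB".
 * Assumes a flat array of fixed-size entries; illustration only. */
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    const uint64_t entries   = 20971520; /* "L2P entries" from the log      */
    const uint64_t addr_size = 4;        /* "L2P address size" in bytes     */
    printf("L2P table: %.2f MiB\n",
           (double)(entries * addr_size) / (1024.0 * 1024.0)); /* 80.00 */
    return 0;
}
```

At a 4 KiB logical block that entry count maps 80 GiB of user-addressable space, consistent with the 102400.00 MiB data region less FTL reserve, though the exact overprovisioning split is not shown in this log.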
[2024-12-16 10:57:34.504179] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:34.681 [2024-12-16 10:57:34.504185] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:34.681 [2024-12-16 10:57:34.504193] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:34.681 [2024-12-16 10:57:34.504200] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:34.681 [2024-12-16 10:57:34.504211] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:34.681 [2024-12-16 10:57:34.504217] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:34.681 [2024-12-16 10:57:34.504223] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:34.681 [2024-12-16 10:57:34.504229] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:34.681 [2024-12-16 10:57:34.504235] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:34.681 [2024-12-16 10:57:34.504241] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:34.681 [2024-12-16 10:57:34.504247] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:34.681 [2024-12-16 10:57:34.504253] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:34.681 [2024-12-16 10:57:34.504261] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:34.681 [2024-12-16 10:57:34.504268] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:25:34.681 [2024-12-16 10:57:34.504273] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:25:34.681 [2024-12-16 10:57:34.504279] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:25:34.681 [2024-12-16 10:57:34.504286] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:25:34.681 [2024-12-16 10:57:34.504292] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:25:34.681 [2024-12-16 10:57:34.504300] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:25:34.681 [2024-12-16 10:57:34.504306] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:25:34.681 [2024-12-16 10:57:34.504312] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:25:34.681 [2024-12-16 10:57:34.504318] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:25:34.681 [2024-12-16 10:57:34.504328] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:25:34.681 [2024-12-16 10:57:34.504334] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:25:34.681 [2024-12-16 10:57:34.504340] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:25:34.681 [2024-12-16 10:57:34.504345] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:25:34.681 [2024-12-16 10:57:34.504351] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:25:34.681 [2024-12-16 10:57:34.504358] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:34.681 [2024-12-16 10:57:34.504364] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:34.681 [2024-12-16 10:57:34.504373] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:34.681 [2024-12-16 10:57:34.504379] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:34.681 [2024-12-16 10:57:34.504385] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:34.681 [2024-12-16 10:57:34.504391] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:34.681 [2024-12-16 10:57:34.504397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.681 [2024-12-16 10:57:34.504405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:34.681 [2024-12-16 10:57:34.504415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.523 ms 00:25:34.681 [2024-12-16 10:57:34.504421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.681 [2024-12-16 10:57:34.520577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.681 [2024-12-16 10:57:34.520617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:34.681 [2024-12-16 10:57:34.520628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.124 ms 00:25:34.681 [2024-12-16 10:57:34.520634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.681 [2024-12-16 10:57:34.520698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.681 [2024-12-16 10:57:34.520705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:34.681 [2024-12-16 10:57:34.520711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:25:34.681 [2024-12-16 10:57:34.520724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.681 [2024-12-16 10:57:34.528299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.681 [2024-12-16 10:57:34.528334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:34.681 [2024-12-16 10:57:34.528344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.532 ms 00:25:34.681 [2024-12-16 10:57:34.528352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.681 [2024-12-16 10:57:34.528384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.681 [2024-12-16 10:57:34.528393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:34.681 
[2024-12-16 10:57:34.528408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:34.681 [2024-12-16 10:57:34.528416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.681 [2024-12-16 10:57:34.528740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.681 [2024-12-16 10:57:34.528759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:34.681 [2024-12-16 10:57:34.528768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.287 ms 00:25:34.681 [2024-12-16 10:57:34.528777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.681 [2024-12-16 10:57:34.528913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.681 [2024-12-16 10:57:34.528923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:34.681 [2024-12-16 10:57:34.528953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.113 ms 00:25:34.681 [2024-12-16 10:57:34.528963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.681 [2024-12-16 10:57:34.533512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.681 [2024-12-16 10:57:34.533542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:34.681 [2024-12-16 10:57:34.533563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.526 ms 00:25:34.681 [2024-12-16 10:57:34.533571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.681 [2024-12-16 10:57:34.535795] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:25:34.681 [2024-12-16 10:57:34.535831] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:34.681 [2024-12-16 10:57:34.535846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.681 [2024-12-16 10:57:34.535855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:34.681 [2024-12-16 10:57:34.535864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.170 ms 00:25:34.681 [2024-12-16 10:57:34.535871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.681 [2024-12-16 10:57:34.547083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.681 [2024-12-16 10:57:34.547109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:34.681 [2024-12-16 10:57:34.547123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.175 ms 00:25:34.681 [2024-12-16 10:57:34.547129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.681 [2024-12-16 10:57:34.548521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.681 [2024-12-16 10:57:34.548622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:34.681 [2024-12-16 10:57:34.548633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.357 ms 00:25:34.681 [2024-12-16 10:57:34.548638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.681 [2024-12-16 10:57:34.549845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.682 [2024-12-16 10:57:34.549866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:34.682 [2024-12-16 10:57:34.549872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 1.185 ms 00:25:34.682 [2024-12-16 10:57:34.549877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.682 [2024-12-16 10:57:34.550124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.682 [2024-12-16 10:57:34.550137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:34.682 [2024-12-16 10:57:34.550143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.201 ms 00:25:34.682 [2024-12-16 10:57:34.550190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.682 [2024-12-16 10:57:34.563691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.682 [2024-12-16 10:57:34.563823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:34.682 [2024-12-16 10:57:34.563840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.487 ms 00:25:34.682 [2024-12-16 10:57:34.563846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.682 [2024-12-16 10:57:34.569495] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:25:34.682 [2024-12-16 10:57:34.571517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.682 [2024-12-16 10:57:34.571541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:34.682 [2024-12-16 10:57:34.571549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.645 ms 00:25:34.682 [2024-12-16 10:57:34.571562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.682 [2024-12-16 10:57:34.571603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.682 [2024-12-16 10:57:34.571613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:34.682 [2024-12-16 10:57:34.571619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:25:34.682 [2024-12-16 10:57:34.571625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.682 [2024-12-16 10:57:34.572102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.682 [2024-12-16 10:57:34.572123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:34.682 [2024-12-16 10:57:34.572131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.443 ms 00:25:34.682 [2024-12-16 10:57:34.572137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.682 [2024-12-16 10:57:34.572163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.682 [2024-12-16 10:57:34.572170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:34.682 [2024-12-16 10:57:34.572176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:25:34.682 [2024-12-16 10:57:34.572184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.682 [2024-12-16 10:57:34.572210] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:34.682 [2024-12-16 10:57:34.572223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.682 [2024-12-16 10:57:34.572232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:34.682 [2024-12-16 10:57:34.572238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:25:34.682 [2024-12-16 10:57:34.572244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:25:34.682 [2024-12-16 10:57:34.575077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.682 [2024-12-16 10:57:34.575108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:34.682 [2024-12-16 10:57:34.575117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.818 ms 00:25:34.682 [2024-12-16 10:57:34.575127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.682 [2024-12-16 10:57:34.575182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.682 [2024-12-16 10:57:34.575190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:34.682 [2024-12-16 10:57:34.575196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:25:34.682 [2024-12-16 10:57:34.575202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.682 [2024-12-16 10:57:34.575895] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 81.126 ms, result 0 00:25:36.066  [2024-12-16T10:57:37.085Z] Copying: 20/1024 [MB] (20 MBps) [2024-12-16T10:57:38.029Z] Copying: 36/1024 [MB] (16 MBps) [2024-12-16T10:57:38.975Z] Copying: 57/1024 [MB] (20 MBps) [2024-12-16T10:57:39.920Z] Copying: 82/1024 [MB] (24 MBps) [2024-12-16T10:57:40.861Z] Copying: 103/1024 [MB] (21 MBps) [2024-12-16T10:57:41.801Z] Copying: 131/1024 [MB] (28 MBps) [2024-12-16T10:57:42.745Z] Copying: 157/1024 [MB] (25 MBps) [2024-12-16T10:57:44.131Z] Copying: 172/1024 [MB] (14 MBps) [2024-12-16T10:57:45.073Z] Copying: 192/1024 [MB] (20 MBps) [2024-12-16T10:57:46.021Z] Copying: 212/1024 [MB] (20 MBps) [2024-12-16T10:57:46.960Z] Copying: 236/1024 [MB] (23 MBps) [2024-12-16T10:57:47.906Z] Copying: 263/1024 [MB] (27 MBps) [2024-12-16T10:57:48.851Z] Copying: 288/1024 [MB] (25 MBps) [2024-12-16T10:57:49.796Z] Copying: 309/1024 [MB] (20 MBps) [2024-12-16T10:57:50.742Z] Copying: 327/1024 [MB] (17 MBps) [2024-12-16T10:57:52.129Z] Copying: 339/1024 [MB] (11 MBps) [2024-12-16T10:57:53.075Z] Copying: 349/1024 [MB] (10 MBps) [2024-12-16T10:57:54.020Z] Copying: 370/1024 [MB] (20 MBps) [2024-12-16T10:57:54.965Z] Copying: 382/1024 [MB] (11 MBps) [2024-12-16T10:57:55.909Z] Copying: 401/1024 [MB] (18 MBps) [2024-12-16T10:57:56.852Z] Copying: 432/1024 [MB] (31 MBps) [2024-12-16T10:57:57.797Z] Copying: 457/1024 [MB] (25 MBps) [2024-12-16T10:57:58.741Z] Copying: 473/1024 [MB] (15 MBps) [2024-12-16T10:58:00.127Z] Copying: 485/1024 [MB] (12 MBps) [2024-12-16T10:58:01.073Z] Copying: 517/1024 [MB] (32 MBps) [2024-12-16T10:58:02.015Z] Copying: 535/1024 [MB] (17 MBps) [2024-12-16T10:58:02.959Z] Copying: 548/1024 [MB] (13 MBps) [2024-12-16T10:58:03.904Z] Copying: 561/1024 [MB] (13 MBps) [2024-12-16T10:58:04.849Z] Copying: 578/1024 [MB] (16 MBps) [2024-12-16T10:58:05.836Z] Copying: 597/1024 [MB] (19 MBps) [2024-12-16T10:58:06.792Z] Copying: 614/1024 [MB] (17 MBps) [2024-12-16T10:58:07.740Z] Copying: 628/1024 [MB] (13 MBps) [2024-12-16T10:58:09.127Z] Copying: 648/1024 [MB] (20 MBps) [2024-12-16T10:58:10.069Z] Copying: 671/1024 [MB] (23 MBps) [2024-12-16T10:58:11.016Z] Copying: 690/1024 [MB] (18 MBps) [2024-12-16T10:58:11.961Z] Copying: 701/1024 [MB] (11 MBps) [2024-12-16T10:58:12.907Z] Copying: 717/1024 [MB] (16 MBps) [2024-12-16T10:58:13.850Z] Copying: 732/1024 [MB] (14 MBps) [2024-12-16T10:58:14.796Z] Copying: 751/1024 [MB] (19 MBps) [2024-12-16T10:58:15.738Z] Copying: 769/1024 [MB] (17 MBps) [2024-12-16T10:58:17.128Z] Copying: 
786/1024 [MB] (17 MBps) [2024-12-16T10:58:18.074Z] Copying: 796/1024 [MB] (10 MBps) [2024-12-16T10:58:19.021Z] Copying: 811/1024 [MB] (14 MBps) [2024-12-16T10:58:19.967Z] Copying: 821/1024 [MB] (10 MBps) [2024-12-16T10:58:20.915Z] Copying: 832/1024 [MB] (10 MBps) [2024-12-16T10:58:21.863Z] Copying: 843/1024 [MB] (10 MBps) [2024-12-16T10:58:22.810Z] Copying: 864/1024 [MB] (21 MBps) [2024-12-16T10:58:23.755Z] Copying: 882/1024 [MB] (17 MBps) [2024-12-16T10:58:25.144Z] Copying: 901/1024 [MB] (18 MBps) [2024-12-16T10:58:25.716Z] Copying: 915/1024 [MB] (14 MBps) [2024-12-16T10:58:27.102Z] Copying: 930/1024 [MB] (14 MBps) [2024-12-16T10:58:28.044Z] Copying: 943/1024 [MB] (12 MBps) [2024-12-16T10:58:28.988Z] Copying: 954/1024 [MB] (11 MBps) [2024-12-16T10:58:29.932Z] Copying: 971/1024 [MB] (16 MBps) [2024-12-16T10:58:30.876Z] Copying: 988/1024 [MB] (17 MBps) [2024-12-16T10:58:31.823Z] Copying: 1002/1024 [MB] (13 MBps) [2024-12-16T10:58:32.085Z] Copying: 1020/1024 [MB] (18 MBps) [2024-12-16T10:58:32.348Z] Copying: 1024/1024 [MB] (average 17 MBps)[2024-12-16 10:58:32.339051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:32.359 [2024-12-16 10:58:32.339146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:26:32.359 [2024-12-16 10:58:32.339163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:32.359 [2024-12-16 10:58:32.339174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.359 [2024-12-16 10:58:32.339209] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:32.359 [2024-12-16 10:58:32.340306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:32.359 [2024-12-16 10:58:32.340367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:26:32.359 [2024-12-16 10:58:32.340411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.078 ms 00:26:32.359 [2024-12-16 10:58:32.340436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.359 [2024-12-16 10:58:32.340740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:32.359 [2024-12-16 10:58:32.340773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:32.359 [2024-12-16 10:58:32.340796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.255 ms 00:26:32.359 [2024-12-16 10:58:32.340818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.359 [2024-12-16 10:58:32.345786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:32.359 [2024-12-16 10:58:32.345973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:26:32.359 [2024-12-16 10:58:32.346048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.938 ms 00:26:32.359 [2024-12-16 10:58:32.346086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.622 [2024-12-16 10:58:32.354558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:32.622 [2024-12-16 10:58:32.354723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:26:32.622 [2024-12-16 10:58:32.354798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.428 ms 00:26:32.622 [2024-12-16 10:58:32.354821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.622 [2024-12-16 10:58:32.358261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:32.622 
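
The bracketed Copying entries above are the test's progress meter: cumulative MiB copied out of 1024, with the throughput achieved in each interval, closing with an overall figure of "average 17 MBps". A crude cross-check of that figure is an unweighted mean of the per-interval rates (a sketch only; the intervals are not equal in length, so this only approximates the true average, and build.log is again an assumed save location):

# Unweighted mean of the "(N MBps)" interval figures; compare against
# the "average 17 MBps" printed at the end of the progress run.
grep -o '([0-9]* MBps)' build.log | tr -d '()' |
  awk '{ sum += $1; n++ } END { if (n) printf "%.1f MBps over %d intervals\n", sum / n, n }'
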
[2024-12-16 10:58:32.358456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:26:32.622 [2024-12-16 10:58:32.358531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.342 ms 00:26:32.622 [2024-12-16 10:58:32.358555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.622 [2024-12-16 10:58:32.363064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:32.622 [2024-12-16 10:58:32.363224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:26:32.622 [2024-12-16 10:58:32.363286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.438 ms 00:26:32.622 [2024-12-16 10:58:32.363309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.622 [2024-12-16 10:58:32.367920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:32.622 [2024-12-16 10:58:32.368078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:26:32.622 [2024-12-16 10:58:32.368135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.476 ms 00:26:32.622 [2024-12-16 10:58:32.368161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.622 [2024-12-16 10:58:32.371150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:32.622 [2024-12-16 10:58:32.371293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:26:32.622 [2024-12-16 10:58:32.371345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.952 ms 00:26:32.622 [2024-12-16 10:58:32.371368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.622 [2024-12-16 10:58:32.374008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:32.622 [2024-12-16 10:58:32.374145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:26:32.622 [2024-12-16 10:58:32.374195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.548 ms 00:26:32.622 [2024-12-16 10:58:32.374216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.622 [2024-12-16 10:58:32.376437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:32.622 [2024-12-16 10:58:32.376574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:26:32.622 [2024-12-16 10:58:32.376625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.175 ms 00:26:32.622 [2024-12-16 10:58:32.376645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.622 [2024-12-16 10:58:32.378819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:32.622 [2024-12-16 10:58:32.378978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:26:32.622 [2024-12-16 10:58:32.379035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.062 ms 00:26:32.622 [2024-12-16 10:58:32.379056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.622 [2024-12-16 10:58:32.379099] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:26:32.622 [2024-12-16 10:58:32.379137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:26:32.622 [2024-12-16 10:58:32.379170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:26:32.622 [2024-12-16 10:58:32.379200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:26:32.622 [2024-12-16 10:58:32.379228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:32.622 [2024-12-16 10:58:32.379306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:32.622 [2024-12-16 10:58:32.379336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:32.622 [2024-12-16 10:58:32.379365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:32.622 [2024-12-16 10:58:32.379395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:32.622 [2024-12-16 10:58:32.379424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:32.622 [2024-12-16 10:58:32.379477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:32.622 [2024-12-16 10:58:32.379506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:32.622 [2024-12-16 10:58:32.379534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:32.622 [2024-12-16 10:58:32.379564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:32.622 [2024-12-16 10:58:32.379671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:32.622 [2024-12-16 10:58:32.379683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:32.622 [2024-12-16 10:58:32.379692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:32.622 [2024-12-16 10:58:32.379699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:32.622 [2024-12-16 10:58:32.379707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:32.622 [2024-12-16 10:58:32.379715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:26:32.622 [2024-12-16 10:58:32.379723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:26:32.622 [2024-12-16 10:58:32.379730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:26:32.622 [2024-12-16 10:58:32.379739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:26:32.622 [2024-12-16 10:58:32.379746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:26:32.622 [2024-12-16 10:58:32.379754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:26:32.622 [2024-12-16 10:58:32.379762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:26:32.622 [2024-12-16 10:58:32.379770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:26:32.622 [2024-12-16 10:58:32.379777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:26:32.622 [2024-12-16 10:58:32.379785] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:26:32.623 [2024-12-16 10:58:32.379792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:26:32.623 [2024-12-16 10:58:32.379800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:26:32.623 [2024-12-16 10:58:32.379807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:26:32.623 [2024-12-16 10:58:32.379815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:26:32.623 [2024-12-16 10:58:32.379822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:26:32.623 [2024-12-16 10:58:32.379830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:26:32.623 [2024-12-16 10:58:32.379838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:26:32.623 [2024-12-16 10:58:32.379846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:26:32.623 [2024-12-16 10:58:32.379854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:26:32.623 [2024-12-16 10:58:32.379861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:26:32.623 [2024-12-16 10:58:32.379869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:26:32.623 [2024-12-16 10:58:32.379877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:26:32.623 [2024-12-16 10:58:32.379885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:26:32.623 [2024-12-16 10:58:32.379893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:26:32.623 [2024-12-16 10:58:32.379900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:26:32.623 [2024-12-16 10:58:32.379911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:26:32.623 [2024-12-16 10:58:32.379919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:26:32.623 [2024-12-16 10:58:32.379941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:26:32.623 [2024-12-16 10:58:32.379949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:26:32.623 [2024-12-16 10:58:32.379957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:26:32.623 [2024-12-16 10:58:32.379977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:26:32.623 [2024-12-16 10:58:32.379985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:26:32.623 [2024-12-16 10:58:32.379992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:26:32.623 [2024-12-16 10:58:32.380001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:26:32.623 [2024-12-16 
10:58:32.380009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:26:32.623 [2024-12-16 10:58:32.380017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:26:32.623 [2024-12-16 10:58:32.380024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:26:32.623 [2024-12-16 10:58:32.380032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:26:32.623 [2024-12-16 10:58:32.380040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:26:32.623 [2024-12-16 10:58:32.380049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:26:32.623 [2024-12-16 10:58:32.380056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:26:32.623 [2024-12-16 10:58:32.380064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:26:32.623 [2024-12-16 10:58:32.380072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:26:32.623 [2024-12-16 10:58:32.380080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:26:32.623 [2024-12-16 10:58:32.380087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:26:32.623 [2024-12-16 10:58:32.380095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:26:32.623 [2024-12-16 10:58:32.380103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:26:32.623 [2024-12-16 10:58:32.380114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:26:32.623 [2024-12-16 10:58:32.380121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:26:32.623 [2024-12-16 10:58:32.380128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:26:32.623 [2024-12-16 10:58:32.380136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:26:32.623 [2024-12-16 10:58:32.380143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:26:32.623 [2024-12-16 10:58:32.380151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:26:32.623 [2024-12-16 10:58:32.380160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:26:32.623 [2024-12-16 10:58:32.380168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:26:32.623 [2024-12-16 10:58:32.380175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:26:32.623 [2024-12-16 10:58:32.380183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:26:32.623 [2024-12-16 10:58:32.380191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:26:32.623 [2024-12-16 10:58:32.380198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 
00:26:32.623 [2024-12-16 10:58:32.380207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:26:32.623 [2024-12-16 10:58:32.380215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:26:32.623 [2024-12-16 10:58:32.380224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:26:32.623 [2024-12-16 10:58:32.380232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:26:32.623 [2024-12-16 10:58:32.380239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:26:32.623 [2024-12-16 10:58:32.380246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:26:32.623 [2024-12-16 10:58:32.380253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:26:32.623 [2024-12-16 10:58:32.380260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:26:32.623 [2024-12-16 10:58:32.380268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:26:32.623 [2024-12-16 10:58:32.380276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:26:32.623 [2024-12-16 10:58:32.380284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:26:32.623 [2024-12-16 10:58:32.380291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:26:32.623 [2024-12-16 10:58:32.380299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:26:32.623 [2024-12-16 10:58:32.380307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:26:32.623 [2024-12-16 10:58:32.380314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:26:32.623 [2024-12-16 10:58:32.380322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:26:32.623 [2024-12-16 10:58:32.380330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:26:32.623 [2024-12-16 10:58:32.380337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:26:32.623 [2024-12-16 10:58:32.380345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:26:32.623 [2024-12-16 10:58:32.380352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:26:32.623 [2024-12-16 10:58:32.380360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:26:32.623 [2024-12-16 10:58:32.380367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:26:32.623 [2024-12-16 10:58:32.380375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:26:32.623 [2024-12-16 10:58:32.380392] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:26:32.623 [2024-12-16 10:58:32.380402] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 12c7a348-302f-4b33-b558-f5d26dfceb2c 00:26:32.623 
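
In the statistics dumped just below, WAF (write amplification factor) is total device writes divided by user writes; this pass issued no user I/O, so 960 / 0 is reported as "inf". The same arithmetic as a one-line check, using only figures taken from this log:

# WAF = total writes / user writes. With total = 960 and user = 0
# (the values ftl_debug.c prints below), the ratio is undefined and
# the log shows "inf".
awk 'BEGIN { total = 960; user = 0; print (user ? total / user : "inf") }'
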
[2024-12-16 10:58:32.380416] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:26:32.623 [2024-12-16 10:58:32.380428] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:26:32.623 [2024-12-16 10:58:32.380436] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:26:32.623 [2024-12-16 10:58:32.380446] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:26:32.623 [2024-12-16 10:58:32.380454] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:26:32.623 [2024-12-16 10:58:32.380462] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:32.623 [2024-12-16 10:58:32.380470] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:32.623 [2024-12-16 10:58:32.380476] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:32.623 [2024-12-16 10:58:32.380483] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:26:32.623 [2024-12-16 10:58:32.380490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:32.623 [2024-12-16 10:58:32.380499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:32.623 [2024-12-16 10:58:32.380520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.393 ms 00:26:32.623 [2024-12-16 10:58:32.380528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.623 [2024-12-16 10:58:32.382865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:32.623 [2024-12-16 10:58:32.383017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:32.624 [2024-12-16 10:58:32.383036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.315 ms 00:26:32.624 [2024-12-16 10:58:32.383045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.624 [2024-12-16 10:58:32.383172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:32.624 [2024-12-16 10:58:32.383181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:32.624 [2024-12-16 10:58:32.383191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.105 ms 00:26:32.624 [2024-12-16 10:58:32.383200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.624 [2024-12-16 10:58:32.389922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:32.624 [2024-12-16 10:58:32.389998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:32.624 [2024-12-16 10:58:32.390008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:32.624 [2024-12-16 10:58:32.390015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.624 [2024-12-16 10:58:32.390081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:32.624 [2024-12-16 10:58:32.390090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:32.624 [2024-12-16 10:58:32.390099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:32.624 [2024-12-16 10:58:32.390107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.624 [2024-12-16 10:58:32.390169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:32.624 [2024-12-16 10:58:32.390184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:32.624 [2024-12-16 10:58:32.390193] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:32.624 [2024-12-16 10:58:32.390200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.624 [2024-12-16 10:58:32.390217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:32.624 [2024-12-16 10:58:32.390228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:32.624 [2024-12-16 10:58:32.390236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:32.624 [2024-12-16 10:58:32.390244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.624 [2024-12-16 10:58:32.403891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:32.624 [2024-12-16 10:58:32.403964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:32.624 [2024-12-16 10:58:32.403989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:32.624 [2024-12-16 10:58:32.403997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.624 [2024-12-16 10:58:32.414794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:32.624 [2024-12-16 10:58:32.414856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:32.624 [2024-12-16 10:58:32.414867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:32.624 [2024-12-16 10:58:32.414875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.624 [2024-12-16 10:58:32.414920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:32.624 [2024-12-16 10:58:32.414951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:32.624 [2024-12-16 10:58:32.414960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:32.624 [2024-12-16 10:58:32.414968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.624 [2024-12-16 10:58:32.415004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:32.624 [2024-12-16 10:58:32.415013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:32.624 [2024-12-16 10:58:32.415027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:32.624 [2024-12-16 10:58:32.415035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.624 [2024-12-16 10:58:32.415103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:32.624 [2024-12-16 10:58:32.415113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:32.624 [2024-12-16 10:58:32.415122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:32.624 [2024-12-16 10:58:32.415130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.624 [2024-12-16 10:58:32.415160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:32.624 [2024-12-16 10:58:32.415169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:26:32.624 [2024-12-16 10:58:32.415178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:32.624 [2024-12-16 10:58:32.415189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.624 [2024-12-16 10:58:32.415230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:32.624 [2024-12-16 10:58:32.415241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open 
cache bdev 00:26:32.624 [2024-12-16 10:58:32.415250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:32.624 [2024-12-16 10:58:32.415259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.624 [2024-12-16 10:58:32.415308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:32.624 [2024-12-16 10:58:32.415319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:32.624 [2024-12-16 10:58:32.415331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:32.624 [2024-12-16 10:58:32.415339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.624 [2024-12-16 10:58:32.415477] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 76.392 ms, result 0 00:26:32.886 00:26:32.886 00:26:32.886 10:58:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:26:35.487 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:26:35.487 10:58:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:26:35.487 10:58:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:26:35.487 10:58:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:26:35.487 10:58:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:26:35.487 10:58:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:26:35.487 10:58:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:26:35.487 10:58:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:26:35.487 10:58:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 88834 00:26:35.487 10:58:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@950 -- # '[' -z 88834 ']' 00:26:35.487 10:58:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@954 -- # kill -0 88834 00:26:35.487 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (88834) - No such process 00:26:35.487 Process with pid 88834 is not found 00:26:35.487 10:58:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@977 -- # echo 'Process with pid 88834 is not found' 00:26:35.487 10:58:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:26:35.487 Remove shared memory files 00:26:35.487 10:58:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:26:35.487 10:58:35 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:26:35.487 10:58:35 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:26:35.746 10:58:35 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:26:35.746 10:58:35 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f 00:26:35.746 10:58:35 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:26:35.746 10:58:35 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:26:35.746 ************************************ 00:26:35.746 END TEST ftl_dirty_shutdown 00:26:35.746 ************************************ 00:26:35.746 00:26:35.746 real 4m24.287s 00:26:35.746 user 5m5.611s 00:26:35.746 sys 0m29.485s 00:26:35.746 10:58:35 
ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:35.746 10:58:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:35.746 10:58:35 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:26:35.746 10:58:35 ftl -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:26:35.746 10:58:35 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:35.746 10:58:35 ftl -- common/autotest_common.sh@10 -- # set +x 00:26:35.747 ************************************ 00:26:35.747 START TEST ftl_upgrade_shutdown 00:26:35.747 ************************************ 00:26:35.747 10:58:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:26:35.747 * Looking for test storage... 00:26:35.747 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:26:35.747 10:58:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:26:35.747 10:58:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:26:35.747 10:58:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1681 -- # lcov --version 00:26:35.747 10:58:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:26:35.747 10:58:35 ftl.ftl_upgrade_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:35.747 10:58:35 ftl.ftl_upgrade_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:35.747 10:58:35 ftl.ftl_upgrade_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:35.747 10:58:35 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:26:35.747 10:58:35 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:26:35.747 10:58:35 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:26:35.747 10:58:35 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:26:35.747 10:58:35 ftl.ftl_upgrade_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:26:35.747 10:58:35 ftl.ftl_upgrade_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:26:35.747 10:58:35 ftl.ftl_upgrade_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:26:35.747 10:58:35 ftl.ftl_upgrade_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:35.747 10:58:35 ftl.ftl_upgrade_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:26:35.747 10:58:35 ftl.ftl_upgrade_shutdown -- scripts/common.sh@345 -- # : 1 00:26:35.747 10:58:35 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:35.747 10:58:35 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:35.747 10:58:35 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # decimal 1 00:26:35.747 10:58:35 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=1 00:26:35.747 10:58:35 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:35.747 10:58:35 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 1 00:26:35.747 10:58:35 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:26:35.747 10:58:35 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # decimal 2 00:26:35.747 10:58:35 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=2 00:26:35.747 10:58:35 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:35.747 10:58:35 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 2 00:26:35.747 10:58:35 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:26:35.747 10:58:35 ftl.ftl_upgrade_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:35.747 10:58:35 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:35.747 10:58:35 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # return 0 00:26:35.747 10:58:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:35.747 10:58:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:26:35.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:35.747 --rc genhtml_branch_coverage=1 00:26:35.747 --rc genhtml_function_coverage=1 00:26:35.747 --rc genhtml_legend=1 00:26:35.747 --rc geninfo_all_blocks=1 00:26:35.747 --rc geninfo_unexecuted_blocks=1 00:26:35.747 00:26:35.747 ' 00:26:35.747 10:58:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:26:35.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:35.747 --rc genhtml_branch_coverage=1 00:26:35.747 --rc genhtml_function_coverage=1 00:26:35.747 --rc genhtml_legend=1 00:26:35.747 --rc geninfo_all_blocks=1 00:26:35.747 --rc geninfo_unexecuted_blocks=1 00:26:35.747 00:26:35.747 ' 00:26:35.747 10:58:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:26:35.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:35.747 --rc genhtml_branch_coverage=1 00:26:35.747 --rc genhtml_function_coverage=1 00:26:35.747 --rc genhtml_legend=1 00:26:35.747 --rc geninfo_all_blocks=1 00:26:35.747 --rc geninfo_unexecuted_blocks=1 00:26:35.747 00:26:35.747 ' 00:26:35.747 10:58:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:26:35.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:35.747 --rc genhtml_branch_coverage=1 00:26:35.747 --rc genhtml_function_coverage=1 00:26:35.747 --rc genhtml_legend=1 00:26:35.747 --rc geninfo_all_blocks=1 00:26:35.747 --rc geninfo_unexecuted_blocks=1 00:26:35.747 00:26:35.747 ' 00:26:35.747 10:58:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:26:35.747 10:58:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:26:35.747 10:58:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:26:35.747 10:58:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:26:35.747 10:58:35 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:26:35.747 10:58:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:26:35.747 10:58:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:35.747 10:58:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:26:35.747 10:58:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:26:35.747 10:58:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:35.747 10:58:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:35.747 10:58:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:26:35.747 10:58:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:26:35.747 10:58:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:35.747 10:58:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:35.747 10:58:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:26:35.747 10:58:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:26:35.747 10:58:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:35.747 10:58:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:35.747 10:58:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:26:35.747 10:58:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:26:35.747 10:58:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:26:35.747 10:58:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:26:35.747 10:58:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:26:35.747 10:58:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:26:35.747 10:58:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:26:35.747 10:58:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:26:35.747 10:58:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:35.747 10:58:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:35.747 10:58:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:26:35.747 10:58:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:26:35.747 10:58:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:26:35.747 10:58:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:26:35.747 10:58:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:26:35.747 10:58:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:26:35.747 10:58:35 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:26:35.747 10:58:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:26:35.747 10:58:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:26:35.747 10:58:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:26:35.747 10:58:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:26:35.748 10:58:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:26:35.748 10:58:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:26:35.748 10:58:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:26:35.748 10:58:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:26:35.748 10:58:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:26:35.748 10:58:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:26:35.748 10:58:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=91683 00:26:35.748 10:58:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:26:35.748 10:58:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:26:35.748 10:58:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 91683 00:26:35.748 10:58:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 91683 ']' 00:26:35.748 10:58:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:35.748 10:58:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:35.748 10:58:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:35.748 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:35.748 10:58:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:35.748 10:58:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:36.008 [2024-12-16 10:58:35.797738] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:26:36.008 [2024-12-16 10:58:35.798023] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91683 ] 00:26:36.008 [2024-12-16 10:58:35.933462] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:36.008 [2024-12-16 10:58:35.967500] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:26:36.953 10:58:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:36.953 10:58:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:26:36.953 10:58:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:26:36.953 10:58:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:26:36.953 10:58:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:26:36.953 10:58:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:26:36.953 10:58:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:26:36.953 10:58:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:26:36.953 10:58:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:26:36.953 10:58:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:26:36.953 10:58:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:26:36.953 10:58:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:26:36.953 10:58:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:26:36.953 10:58:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:26:36.953 10:58:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:26:36.953 10:58:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:26:36.953 10:58:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:26:36.953 10:58:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:26:36.953 10:58:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:26:36.953 10:58:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:26:36.953 10:58:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:26:36.953 10:58:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:26:36.953 10:58:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:26:36.953 10:58:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:26:36.953 10:58:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:26:36.953 10:58:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:26:36.953 10:58:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=basen1 00:26:36.953 10:58:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:26:36.953 10:58:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:26:36.953 10:58:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1381 
-- # local nb 00:26:36.953 10:58:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:26:37.216 10:58:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:26:37.216 { 00:26:37.216 "name": "basen1", 00:26:37.216 "aliases": [ 00:26:37.216 "c352424d-38f8-4c75-9978-d68b0b714a28" 00:26:37.216 ], 00:26:37.216 "product_name": "NVMe disk", 00:26:37.216 "block_size": 4096, 00:26:37.216 "num_blocks": 1310720, 00:26:37.216 "uuid": "c352424d-38f8-4c75-9978-d68b0b714a28", 00:26:37.216 "numa_id": -1, 00:26:37.216 "assigned_rate_limits": { 00:26:37.216 "rw_ios_per_sec": 0, 00:26:37.216 "rw_mbytes_per_sec": 0, 00:26:37.216 "r_mbytes_per_sec": 0, 00:26:37.216 "w_mbytes_per_sec": 0 00:26:37.216 }, 00:26:37.216 "claimed": true, 00:26:37.216 "claim_type": "read_many_write_one", 00:26:37.216 "zoned": false, 00:26:37.216 "supported_io_types": { 00:26:37.216 "read": true, 00:26:37.216 "write": true, 00:26:37.216 "unmap": true, 00:26:37.216 "flush": true, 00:26:37.216 "reset": true, 00:26:37.216 "nvme_admin": true, 00:26:37.216 "nvme_io": true, 00:26:37.216 "nvme_io_md": false, 00:26:37.216 "write_zeroes": true, 00:26:37.216 "zcopy": false, 00:26:37.216 "get_zone_info": false, 00:26:37.216 "zone_management": false, 00:26:37.216 "zone_append": false, 00:26:37.216 "compare": true, 00:26:37.216 "compare_and_write": false, 00:26:37.216 "abort": true, 00:26:37.216 "seek_hole": false, 00:26:37.216 "seek_data": false, 00:26:37.216 "copy": true, 00:26:37.216 "nvme_iov_md": false 00:26:37.216 }, 00:26:37.216 "driver_specific": { 00:26:37.216 "nvme": [ 00:26:37.216 { 00:26:37.216 "pci_address": "0000:00:11.0", 00:26:37.216 "trid": { 00:26:37.216 "trtype": "PCIe", 00:26:37.216 "traddr": "0000:00:11.0" 00:26:37.216 }, 00:26:37.216 "ctrlr_data": { 00:26:37.216 "cntlid": 0, 00:26:37.216 "vendor_id": "0x1b36", 00:26:37.216 "model_number": "QEMU NVMe Ctrl", 00:26:37.216 "serial_number": "12341", 00:26:37.216 "firmware_revision": "8.0.0", 00:26:37.216 "subnqn": "nqn.2019-08.org.qemu:12341", 00:26:37.216 "oacs": { 00:26:37.216 "security": 0, 00:26:37.216 "format": 1, 00:26:37.216 "firmware": 0, 00:26:37.216 "ns_manage": 1 00:26:37.216 }, 00:26:37.216 "multi_ctrlr": false, 00:26:37.216 "ana_reporting": false 00:26:37.216 }, 00:26:37.216 "vs": { 00:26:37.216 "nvme_version": "1.4" 00:26:37.216 }, 00:26:37.216 "ns_data": { 00:26:37.216 "id": 1, 00:26:37.216 "can_share": false 00:26:37.216 } 00:26:37.216 } 00:26:37.216 ], 00:26:37.216 "mp_policy": "active_passive" 00:26:37.216 } 00:26:37.216 } 00:26:37.216 ]' 00:26:37.216 10:58:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:26:37.216 10:58:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:26:37.216 10:58:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:26:37.216 10:58:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # nb=1310720 00:26:37.216 10:58:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:26:37.216 10:58:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # echo 5120 00:26:37.216 10:58:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:26:37.216 10:58:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:26:37.216 10:58:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:26:37.216 10:58:37 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:26:37.216 10:58:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:26:37.478 10:58:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=16835df1-8efd-47bd-b243-c9b0c173475e 00:26:37.478 10:58:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:26:37.478 10:58:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 16835df1-8efd-47bd-b243-c9b0c173475e 00:26:37.740 10:58:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:26:38.002 10:58:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=4618fd72-ae85-4ed8-8969-40493c30bbf0 00:26:38.002 10:58:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u 4618fd72-ae85-4ed8-8969-40493c30bbf0 00:26:38.264 10:58:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=b1f046b1-84ea-4117-a89f-449a52e2475a 00:26:38.264 10:58:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z b1f046b1-84ea-4117-a89f-449a52e2475a ]] 00:26:38.264 10:58:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 b1f046b1-84ea-4117-a89f-449a52e2475a 5120 00:26:38.264 10:58:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:26:38.264 10:58:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:26:38.264 10:58:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=b1f046b1-84ea-4117-a89f-449a52e2475a 00:26:38.264 10:58:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:26:38.264 10:58:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size b1f046b1-84ea-4117-a89f-449a52e2475a 00:26:38.264 10:58:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=b1f046b1-84ea-4117-a89f-449a52e2475a 00:26:38.264 10:58:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:26:38.264 10:58:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:26:38.264 10:58:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:26:38.264 10:58:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b1f046b1-84ea-4117-a89f-449a52e2475a 00:26:38.526 10:58:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:26:38.526 { 00:26:38.526 "name": "b1f046b1-84ea-4117-a89f-449a52e2475a", 00:26:38.526 "aliases": [ 00:26:38.526 "lvs/basen1p0" 00:26:38.526 ], 00:26:38.526 "product_name": "Logical Volume", 00:26:38.526 "block_size": 4096, 00:26:38.526 "num_blocks": 5242880, 00:26:38.526 "uuid": "b1f046b1-84ea-4117-a89f-449a52e2475a", 00:26:38.526 "assigned_rate_limits": { 00:26:38.526 "rw_ios_per_sec": 0, 00:26:38.526 "rw_mbytes_per_sec": 0, 00:26:38.526 "r_mbytes_per_sec": 0, 00:26:38.526 "w_mbytes_per_sec": 0 00:26:38.526 }, 00:26:38.526 "claimed": false, 00:26:38.526 "zoned": false, 00:26:38.526 "supported_io_types": { 00:26:38.527 "read": true, 00:26:38.527 "write": true, 00:26:38.527 "unmap": true, 00:26:38.527 "flush": false, 00:26:38.527 "reset": true, 00:26:38.527 "nvme_admin": false, 00:26:38.527 "nvme_io": false, 00:26:38.527 "nvme_io_md": false, 00:26:38.527 "write_zeroes": 
true, 00:26:38.527 "zcopy": false, 00:26:38.527 "get_zone_info": false, 00:26:38.527 "zone_management": false, 00:26:38.527 "zone_append": false, 00:26:38.527 "compare": false, 00:26:38.527 "compare_and_write": false, 00:26:38.527 "abort": false, 00:26:38.527 "seek_hole": true, 00:26:38.527 "seek_data": true, 00:26:38.527 "copy": false, 00:26:38.527 "nvme_iov_md": false 00:26:38.527 }, 00:26:38.527 "driver_specific": { 00:26:38.527 "lvol": { 00:26:38.527 "lvol_store_uuid": "4618fd72-ae85-4ed8-8969-40493c30bbf0", 00:26:38.527 "base_bdev": "basen1", 00:26:38.527 "thin_provision": true, 00:26:38.527 "num_allocated_clusters": 0, 00:26:38.527 "snapshot": false, 00:26:38.527 "clone": false, 00:26:38.527 "esnap_clone": false 00:26:38.527 } 00:26:38.527 } 00:26:38.527 } 00:26:38.527 ]' 00:26:38.527 10:58:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:26:38.527 10:58:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:26:38.527 10:58:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:26:38.527 10:58:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # nb=5242880 00:26:38.527 10:58:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=20480 00:26:38.527 10:58:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # echo 20480 00:26:38.527 10:58:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:26:38.527 10:58:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:26:38.527 10:58:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:26:38.789 10:58:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:26:38.789 10:58:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:26:38.789 10:58:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:26:39.050 10:58:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:26:39.050 10:58:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:26:39.050 10:58:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d b1f046b1-84ea-4117-a89f-449a52e2475a -c cachen1p0 --l2p_dram_limit 2 00:26:39.313 [2024-12-16 10:58:39.048299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:39.313 [2024-12-16 10:58:39.048367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:26:39.313 [2024-12-16 10:58:39.048383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:26:39.313 [2024-12-16 10:58:39.048394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:39.313 [2024-12-16 10:58:39.048463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:39.313 [2024-12-16 10:58:39.048476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:26:39.313 [2024-12-16 10:58:39.048485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.051 ms 00:26:39.313 [2024-12-16 10:58:39.048499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:39.313 [2024-12-16 10:58:39.048523] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:26:39.313 [2024-12-16 
10:58:39.048843] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:26:39.313 [2024-12-16 10:58:39.048864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:39.313 [2024-12-16 10:58:39.048875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:26:39.313 [2024-12-16 10:58:39.048886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.346 ms 00:26:39.313 [2024-12-16 10:58:39.048897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:39.313 [2024-12-16 10:58:39.048964] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID 9711f87b-2f3a-4e62-9cc7-2dfc884de907 00:26:39.313 [2024-12-16 10:58:39.050705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:39.313 [2024-12-16 10:58:39.050754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:26:39.313 [2024-12-16 10:58:39.050767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.037 ms 00:26:39.313 [2024-12-16 10:58:39.050775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:39.313 [2024-12-16 10:58:39.059490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:39.313 [2024-12-16 10:58:39.059532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:26:39.313 [2024-12-16 10:58:39.059549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.624 ms 00:26:39.313 [2024-12-16 10:58:39.059557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:39.313 [2024-12-16 10:58:39.059608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:39.313 [2024-12-16 10:58:39.059617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:26:39.313 [2024-12-16 10:58:39.059627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.023 ms 00:26:39.313 [2024-12-16 10:58:39.059638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:39.313 [2024-12-16 10:58:39.059703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:39.313 [2024-12-16 10:58:39.059713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:26:39.313 [2024-12-16 10:58:39.059724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:26:39.313 [2024-12-16 10:58:39.059732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:39.313 [2024-12-16 10:58:39.059759] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:26:39.313 [2024-12-16 10:58:39.062079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:39.313 [2024-12-16 10:58:39.062271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:26:39.313 [2024-12-16 10:58:39.062293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.329 ms 00:26:39.313 [2024-12-16 10:58:39.062304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:39.313 [2024-12-16 10:58:39.062342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:39.313 [2024-12-16 10:58:39.062354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:26:39.313 [2024-12-16 10:58:39.062370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:26:39.313 [2024-12-16 10:58:39.062383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:26:39.313 [2024-12-16 10:58:39.062401] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:26:39.313 [2024-12-16 10:58:39.062556] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:26:39.313 [2024-12-16 10:58:39.062572] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:26:39.313 [2024-12-16 10:58:39.062586] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:26:39.313 [2024-12-16 10:58:39.062600] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:26:39.313 [2024-12-16 10:58:39.062612] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:26:39.313 [2024-12-16 10:58:39.062622] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:26:39.313 [2024-12-16 10:58:39.062642] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:26:39.313 [2024-12-16 10:58:39.062650] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:26:39.313 [2024-12-16 10:58:39.062663] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:26:39.313 [2024-12-16 10:58:39.062703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:39.313 [2024-12-16 10:58:39.062713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:26:39.313 [2024-12-16 10:58:39.062727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.302 ms 00:26:39.313 [2024-12-16 10:58:39.062737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:39.313 [2024-12-16 10:58:39.062822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:39.313 [2024-12-16 10:58:39.062836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:26:39.313 [2024-12-16 10:58:39.062845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.068 ms 00:26:39.314 [2024-12-16 10:58:39.062857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:39.314 [2024-12-16 10:58:39.062971] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:26:39.314 [2024-12-16 10:58:39.062989] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:26:39.314 [2024-12-16 10:58:39.062997] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:26:39.314 [2024-12-16 10:58:39.063013] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:39.314 [2024-12-16 10:58:39.063023] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:26:39.314 [2024-12-16 10:58:39.063033] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:26:39.314 [2024-12-16 10:58:39.063040] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:26:39.314 [2024-12-16 10:58:39.063049] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:26:39.314 [2024-12-16 10:58:39.063057] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:26:39.314 [2024-12-16 10:58:39.063068] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:39.314 [2024-12-16 10:58:39.063077] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:26:39.314 [2024-12-16 10:58:39.063086] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl] offset: 14.75 MiB 00:26:39.314 [2024-12-16 10:58:39.063095] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:39.314 [2024-12-16 10:58:39.063107] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:26:39.314 [2024-12-16 10:58:39.063116] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:26:39.314 [2024-12-16 10:58:39.063126] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:39.314 [2024-12-16 10:58:39.063133] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:26:39.314 [2024-12-16 10:58:39.063142] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:26:39.314 [2024-12-16 10:58:39.063148] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:39.314 [2024-12-16 10:58:39.063159] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:26:39.314 [2024-12-16 10:58:39.063168] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:26:39.314 [2024-12-16 10:58:39.063177] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:26:39.314 [2024-12-16 10:58:39.063187] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:26:39.314 [2024-12-16 10:58:39.063196] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:26:39.314 [2024-12-16 10:58:39.063205] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:26:39.314 [2024-12-16 10:58:39.063214] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:26:39.314 [2024-12-16 10:58:39.063220] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:26:39.314 [2024-12-16 10:58:39.063229] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:26:39.314 [2024-12-16 10:58:39.063235] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:26:39.314 [2024-12-16 10:58:39.063248] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:26:39.314 [2024-12-16 10:58:39.063256] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:26:39.314 [2024-12-16 10:58:39.063264] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:26:39.314 [2024-12-16 10:58:39.063271] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:26:39.314 [2024-12-16 10:58:39.063279] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:39.314 [2024-12-16 10:58:39.063286] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:26:39.314 [2024-12-16 10:58:39.063295] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:26:39.314 [2024-12-16 10:58:39.063302] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:39.314 [2024-12-16 10:58:39.063310] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:26:39.314 [2024-12-16 10:58:39.063317] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:26:39.314 [2024-12-16 10:58:39.063326] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:39.314 [2024-12-16 10:58:39.063334] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:26:39.314 [2024-12-16 10:58:39.063343] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:26:39.314 [2024-12-16 10:58:39.063351] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:39.314 [2024-12-16 10:58:39.063359] ftl_layout.c: 775:ftl_layout_dump: 
*NOTICE*: [FTL][ftl] Base device layout: 00:26:39.314 [2024-12-16 10:58:39.063369] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:26:39.314 [2024-12-16 10:58:39.063380] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:26:39.314 [2024-12-16 10:58:39.063387] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:39.314 [2024-12-16 10:58:39.063397] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:26:39.314 [2024-12-16 10:58:39.063411] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:26:39.314 [2024-12-16 10:58:39.063421] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:26:39.314 [2024-12-16 10:58:39.063427] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:26:39.314 [2024-12-16 10:58:39.063435] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:26:39.314 [2024-12-16 10:58:39.063442] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:26:39.314 [2024-12-16 10:58:39.063456] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:26:39.314 [2024-12-16 10:58:39.063468] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:39.314 [2024-12-16 10:58:39.063480] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:26:39.314 [2024-12-16 10:58:39.063489] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:26:39.314 [2024-12-16 10:58:39.063498] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:26:39.314 [2024-12-16 10:58:39.063505] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:26:39.314 [2024-12-16 10:58:39.063516] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:26:39.314 [2024-12-16 10:58:39.063526] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:26:39.314 [2024-12-16 10:58:39.063537] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:26:39.314 [2024-12-16 10:58:39.063544] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:26:39.314 [2024-12-16 10:58:39.063555] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:26:39.314 [2024-12-16 10:58:39.063563] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:26:39.314 [2024-12-16 10:58:39.063572] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:26:39.314 [2024-12-16 10:58:39.063579] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:26:39.314 [2024-12-16 10:58:39.063589] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:26:39.314 [2024-12-16 10:58:39.063598] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:26:39.314 [2024-12-16 10:58:39.063607] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:26:39.314 [2024-12-16 10:58:39.063622] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:39.314 [2024-12-16 10:58:39.063633] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:39.314 [2024-12-16 10:58:39.063640] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:26:39.314 [2024-12-16 10:58:39.063649] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:26:39.314 [2024-12-16 10:58:39.063656] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:26:39.314 [2024-12-16 10:58:39.063667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:39.314 [2024-12-16 10:58:39.063675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:26:39.314 [2024-12-16 10:58:39.063690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.778 ms 00:26:39.314 [2024-12-16 10:58:39.063697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:39.314 [2024-12-16 10:58:39.063761] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 
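Note: before the FTL startup trace above could begin, ftl/common.sh built the whole bdev stack. Restated from the xtrace as straight shell (rpc_py stands for /home/vagrant/spdk_repo/spdk/scripts/rpc.py; only the happy path visible in the trace is shown):

    # 1. Base device: attach the 0000:00:11.0 namespace and measure it.
    rpc_py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0    # -> basen1
    bs=$(rpc_py bdev_get_bdevs -b basen1 | jq '.[] .block_size')          # 4096
    nb=$(rpc_py bdev_get_bdevs -b basen1 | jq '.[] .num_blocks')          # 1310720
    base_size=$(( bs * nb / 1024 / 1024 ))                                # 5120 MiB
    # FTL_BASE_SIZE=20480 exceeds those 5120 MiB, so the volume created
    # below is thin-provisioned (-t) to fit on the 5 GiB namespace.

    # 2. Drop stale lvol stores, then carve the thin volume FTL will sit on.
    for lvs in $(rpc_py bdev_lvol_get_lvstores | jq -r '.[] | .uuid'); do
        rpc_py bdev_lvol_delete_lvstore -u "$lvs"
    done
    lvs=$(rpc_py bdev_lvol_create_lvstore basen1 lvs)
    base_bdev=$(rpc_py bdev_lvol_create basen1p0 20480 -t -u "$lvs")

    # 3. NV cache: second controller, one 5120 MiB split as the write buffer.
    rpc_py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0   # -> cachen1
    rpc_py bdev_split_create cachen1 -s 5120 1                            # -> cachen1p0

    # 4. Bind everything into one FTL bdev; the 60 s RPC timeout covers the
    #    first-boot NV-cache scrub traced above (~3.2 s in this run).
    rpc_py -t 60 bdev_ftl_create -b ftl -d "$base_bdev" -c cachen1p0 --l2p_dram_limit 2

The layout dump is self-consistent with these sizes: the superblock's l2p region (type:0x2, blk_sz:0xe80) is 3712 blocks * 4 KiB = 14.50 MiB, enough for the reported 3774873 L2P entries at 4 B each ≈ 14.4 MiB, and the "l2p maximum resident size is: 1 (of 2) MiB" line further down is the --l2p_dram_limit 2 knob taking effect.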
00:26:39.314 [2024-12-16 10:58:39.063777] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:26:42.619 [2024-12-16 10:58:42.271139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:42.619 [2024-12-16 10:58:42.271219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:26:42.619 [2024-12-16 10:58:42.271244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3207.357 ms 00:26:42.619 [2024-12-16 10:58:42.271254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:42.619 [2024-12-16 10:58:42.286669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:42.619 [2024-12-16 10:58:42.286895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:26:42.619 [2024-12-16 10:58:42.286924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.307 ms 00:26:42.619 [2024-12-16 10:58:42.286956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:42.619 [2024-12-16 10:58:42.287019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:42.619 [2024-12-16 10:58:42.287030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:26:42.619 [2024-12-16 10:58:42.287048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:26:42.619 [2024-12-16 10:58:42.287058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:42.619 [2024-12-16 10:58:42.299390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:42.619 [2024-12-16 10:58:42.299439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:26:42.619 [2024-12-16 10:58:42.299454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.270 ms 00:26:42.619 [2024-12-16 10:58:42.299463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:42.619 [2024-12-16 10:58:42.299501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:42.619 [2024-12-16 10:58:42.299510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:26:42.619 [2024-12-16 10:58:42.299525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:26:42.619 [2024-12-16 10:58:42.299533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:42.619 [2024-12-16 10:58:42.300127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:42.619 [2024-12-16 10:58:42.300163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:26:42.619 [2024-12-16 10:58:42.300178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.532 ms 00:26:42.619 [2024-12-16 10:58:42.300195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:42.619 [2024-12-16 10:58:42.300255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:42.619 [2024-12-16 10:58:42.300268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:26:42.619 [2024-12-16 10:58:42.300281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.022 ms 00:26:42.619 [2024-12-16 10:58:42.300294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:42.619 [2024-12-16 10:58:42.324365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:42.619 [2024-12-16 10:58:42.324774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:26:42.619 [2024-12-16 10:58:42.324837] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.032 ms 00:26:42.619 [2024-12-16 10:58:42.324861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:42.619 [2024-12-16 10:58:42.335296] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:26:42.619 [2024-12-16 10:58:42.336782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:42.619 [2024-12-16 10:58:42.336835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:26:42.619 [2024-12-16 10:58:42.336847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.633 ms 00:26:42.619 [2024-12-16 10:58:42.336858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:42.619 [2024-12-16 10:58:42.354599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:42.619 [2024-12-16 10:58:42.354792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:26:42.619 [2024-12-16 10:58:42.354813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.705 ms 00:26:42.619 [2024-12-16 10:58:42.354828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:42.619 [2024-12-16 10:58:42.354955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:42.619 [2024-12-16 10:58:42.354970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:26:42.619 [2024-12-16 10:58:42.354979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.078 ms 00:26:42.619 [2024-12-16 10:58:42.354991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:42.619 [2024-12-16 10:58:42.360223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:42.619 [2024-12-16 10:58:42.360290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:26:42.619 [2024-12-16 10:58:42.360302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.207 ms 00:26:42.619 [2024-12-16 10:58:42.360314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:42.619 [2024-12-16 10:58:42.365397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:42.619 [2024-12-16 10:58:42.365588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:26:42.619 [2024-12-16 10:58:42.365608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.027 ms 00:26:42.619 [2024-12-16 10:58:42.365619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:42.619 [2024-12-16 10:58:42.365998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:42.619 [2024-12-16 10:58:42.366017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:26:42.619 [2024-12-16 10:58:42.366029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.337 ms 00:26:42.619 [2024-12-16 10:58:42.366042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:42.619 [2024-12-16 10:58:42.406901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:42.619 [2024-12-16 10:58:42.406981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:26:42.619 [2024-12-16 10:58:42.406995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 40.833 ms 00:26:42.619 [2024-12-16 10:58:42.407007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:42.619 [2024-12-16 10:58:42.414590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:26:42.619 [2024-12-16 10:58:42.414657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:26:42.619 [2024-12-16 10:58:42.414669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.494 ms 00:26:42.619 [2024-12-16 10:58:42.414681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:42.619 [2024-12-16 10:58:42.420798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:42.619 [2024-12-16 10:58:42.421009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:26:42.619 [2024-12-16 10:58:42.421029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.064 ms 00:26:42.619 [2024-12-16 10:58:42.421039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:42.619 [2024-12-16 10:58:42.427458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:42.619 [2024-12-16 10:58:42.427515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:26:42.619 [2024-12-16 10:58:42.427527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.326 ms 00:26:42.619 [2024-12-16 10:58:42.427540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:42.619 [2024-12-16 10:58:42.427602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:42.619 [2024-12-16 10:58:42.427615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:26:42.619 [2024-12-16 10:58:42.427624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:26:42.619 [2024-12-16 10:58:42.427634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:42.619 [2024-12-16 10:58:42.427712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:42.619 [2024-12-16 10:58:42.427726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:26:42.619 [2024-12-16 10:58:42.427735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.041 ms 00:26:42.619 [2024-12-16 10:58:42.427745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:42.619 [2024-12-16 10:58:42.429047] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3380.207 ms, result 0 00:26:42.619 { 00:26:42.619 "name": "ftl", 00:26:42.619 "uuid": "9711f87b-2f3a-4e62-9cc7-2dfc884de907" 00:26:42.619 } 00:26:42.619 10:58:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:26:42.880 [2024-12-16 10:58:42.642534] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:42.880 10:58:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:26:43.141 10:58:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:26:43.141 [2024-12-16 10:58:43.083036] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:26:43.141 10:58:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:26:43.402 [2024-12-16 10:58:43.291428] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:26:43.402 10:58:43 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:26:43.975 10:58:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:26:43.975 Fill FTL, iteration 1 00:26:43.975 10:58:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:26:43.975 10:58:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:26:43.975 10:58:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:26:43.975 10:58:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:26:43.975 10:58:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:26:43.975 10:58:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:26:43.975 10:58:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:26:43.975 10:58:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:26:43.975 10:58:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:26:43.975 10:58:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:26:43.975 10:58:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:26:43.975 10:58:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:26:43.975 10:58:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:26:43.975 10:58:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:26:43.975 10:58:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:26:43.975 10:58:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=91807 00:26:43.975 10:58:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:26:43.975 10:58:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 91807 /var/tmp/spdk.tgt.sock 00:26:43.975 10:58:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 91807 ']' 00:26:43.975 10:58:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:26:43.975 10:58:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:43.975 10:58:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:26:43.975 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:26:43.975 10:58:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:43.975 10:58:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:43.975 10:58:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:26:43.975 [2024-12-16 10:58:43.743821] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
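Note: the constants set above (upgrade_shutdown.sh@28–35) define the fill geometry: each pass writes exactly 1 GiB of /dev/urandom through the exported FTL bdev at queue depth 2, and the two passes tile the namespace back to back. The arithmetic, spelled out:

    bs=1048576 count=1024 qd=2 iterations=2
    echo $(( bs * count ))    # 1073741824 bytes = the $size constant (1 GiB per pass)
    # pass 1: --seek=0     fills 1 MiB blocks [0,    1024)
    # pass 2: --seek=1024  fills 1 MiB blocks [1024, 2048)
    # Each pass is read back and checksummed right after it is written.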
00:26:43.975 [2024-12-16 10:58:43.744475] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91807 ] 00:26:43.975 [2024-12-16 10:58:43.880317] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:43.975 [2024-12-16 10:58:43.930740] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:26:44.921 10:58:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:44.921 10:58:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:26:44.921 10:58:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:26:44.921 ftln1 00:26:44.921 10:58:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:26:44.922 10:58:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:26:45.181 10:58:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:26:45.181 10:58:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 91807 00:26:45.181 10:58:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@950 -- # '[' -z 91807 ']' 00:26:45.181 10:58:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # kill -0 91807 00:26:45.181 10:58:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # uname 00:26:45.181 10:58:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:45.181 10:58:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 91807 00:26:45.181 killing process with pid 91807 00:26:45.181 10:58:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:26:45.181 10:58:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:26:45.181 10:58:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@968 -- # echo 'killing process with pid 91807' 00:26:45.181 10:58:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@969 -- # kill 91807 00:26:45.181 10:58:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@974 -- # wait 91807 00:26:45.748 10:58:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:26:45.748 10:58:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:26:45.748 [2024-12-16 10:58:45.541143] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
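Note: tcp_initiator_setup (ftl/common.sh@151–177) runs a second spdk_tgt as the initiator, pinned to core 1 with its own RPC socket, connects it to the target over NVMe/TCP, and snapshots the resulting bdev configuration into ini.json so every later spdk_dd run can replay it without a live initiator process. A reconstruction from the trace — the redirection into $spdk_ini_cnfg is inferred (the trace shows only the echo/save_subsystem_config pieces and the later "[[ -f .../ini.json ]]" short-circuit):

    tcp_initiator_setup() {            # sketch, not the verbatim function
        local rpc="$rootdir/scripts/rpc.py -s $spdk_ini_rpc"
        [[ -f $spdk_ini_cnfg ]] && return 0              # common.sh@153-154
        "$spdk_ini_bin" '--cpumask=[1]' --rpc-socket="$spdk_ini_rpc" &
        spdk_ini_pid=$!                                  # 91807 in this run
        waitforlisten "$spdk_ini_pid" "$spdk_ini_rpc"
        $rpc bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 \
            -f ipv4 -n nqn.2018-09.io.spdk:cnode0        # -> ftln1
        { echo '{"subsystems": ['
          $rpc save_subsystem_config -n bdev
          echo ']}'; } > "$spdk_ini_cnfg"                # redirection inferred
        killprocess "$spdk_ini_pid"
    }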
00:26:45.748 [2024-12-16 10:58:45.541524] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91838 ] 00:26:45.748 [2024-12-16 10:58:45.677541] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:45.748 [2024-12-16 10:58:45.719584] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:26:47.131  [2024-12-16T10:58:48.063Z] Copying: 208/1024 [MB] (208 MBps) [2024-12-16T10:58:49.007Z] Copying: 442/1024 [MB] (234 MBps) [2024-12-16T10:58:49.951Z] Copying: 676/1024 [MB] (234 MBps) [2024-12-16T10:58:50.523Z] Copying: 911/1024 [MB] (235 MBps) [2024-12-16T10:58:50.783Z] Copying: 1024/1024 [MB] (average 229 MBps) 00:26:50.794 00:26:50.794 Calculate MD5 checksum, iteration 1 00:26:50.794 10:58:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:26:50.794 10:58:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:26:50.794 10:58:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:26:50.794 10:58:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:26:50.794 10:58:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:26:50.794 10:58:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:26:50.794 10:58:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:26:50.794 10:58:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:26:50.794 [2024-12-16 10:58:50.647761] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:26:50.794 [2024-12-16 10:58:50.648423] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91896 ] 00:26:51.055 [2024-12-16 10:58:50.783015] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:51.055 [2024-12-16 10:58:50.841468] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:26:52.443  [2024-12-16T10:58:52.693Z] Copying: 655/1024 [MB] (655 MBps) [2024-12-16T10:58:52.954Z] Copying: 1024/1024 [MB] (average 630 MBps) 00:26:52.965 00:26:52.965 10:58:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:26:52.965 10:58:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:26:54.871 10:58:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:26:54.871 Fill FTL, iteration 2 00:26:54.871 10:58:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=74f941d41762230fe0fc55ff9eccf28a 00:26:54.871 10:58:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:26:54.871 10:58:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:26:54.871 10:58:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:26:54.871 10:58:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:26:54.871 10:58:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:26:54.871 10:58:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:26:54.871 10:58:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:26:54.871 10:58:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:26:54.871 10:58:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:26:54.871 [2024-12-16 10:58:54.731745] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
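Note: the checksum bookkeeping just traced (upgrade_shutdown.sh@41–48) reads the freshly written 1 GiB back through ftln1 into a scratch file and records its MD5; the sums[] array is what the post-shutdown half of the test can compare against. Restated, with $file abbreviating the scratch path test/ftl/file:

    seek=$(( seek + count ))      # @41: next fill pass starts at block 1024
    tcp_dd --ib=ftln1 --of="$file" --bs=1048576 --count=1024 --qd=2 --skip="$skip"
    sums[i]=$(md5sum "$file" | cut -f1 '-d ')   # 74f941d41762230fe0fc55ff9eccf28a
    skip=$(( skip + count ))      # @45: next readback starts at block 1024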
00:26:54.871 [2024-12-16 10:58:54.731853] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91942 ] 00:26:55.130 [2024-12-16 10:58:54.868526] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:55.130 [2024-12-16 10:58:54.901640] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:26:56.516  [2024-12-16T10:58:57.447Z] Copying: 190/1024 [MB] (190 MBps) [2024-12-16T10:58:58.382Z] Copying: 373/1024 [MB] (183 MBps) [2024-12-16T10:58:59.315Z] Copying: 611/1024 [MB] (238 MBps) [2024-12-16T10:58:59.883Z] Copying: 864/1024 [MB] (253 MBps) [2024-12-16T10:58:59.883Z] Copying: 1024/1024 [MB] (average 220 MBps) 00:26:59.894 00:27:00.158 10:58:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:27:00.158 10:58:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:27:00.158 Calculate MD5 checksum, iteration 2 00:27:00.158 10:58:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:00.158 10:58:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:00.158 10:58:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:00.158 10:58:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:00.158 10:58:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:27:00.158 10:58:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:00.158 [2024-12-16 10:58:59.941109] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:27:00.158 [2024-12-16 10:58:59.941334] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91999 ] 00:27:00.158 [2024-12-16 10:59:00.077743] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:00.158 [2024-12-16 10:59:00.114814] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:27:01.542  [2024-12-16T10:59:02.474Z] Copying: 619/1024 [MB] (619 MBps) [2024-12-16T10:59:03.046Z] Copying: 1024/1024 [MB] (average 587 MBps) 00:27:03.057 00:27:03.057 10:59:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:27:03.057 10:59:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:05.638 10:59:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:27:05.638 10:59:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=d322b80ca65eca0784b0918d435b8c3e 00:27:05.638 10:59:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:27:05.638 10:59:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:27:05.638 10:59:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:27:05.638 [2024-12-16 10:59:05.265020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:05.638 [2024-12-16 10:59:05.265141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:27:05.638 [2024-12-16 10:59:05.265192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:27:05.638 [2024-12-16 10:59:05.265212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:05.638 [2024-12-16 10:59:05.265244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:05.638 [2024-12-16 10:59:05.265261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:27:05.638 [2024-12-16 10:59:05.265281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:27:05.638 [2024-12-16 10:59:05.265301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:05.638 [2024-12-16 10:59:05.265365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:05.638 [2024-12-16 10:59:05.265385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:27:05.638 [2024-12-16 10:59:05.265402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:27:05.638 [2024-12-16 10:59:05.265443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:05.639 [2024-12-16 10:59:05.265518] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.479 ms, result 0 00:27:05.639 true 00:27:05.639 10:59:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:05.639 { 00:27:05.639 "name": "ftl", 00:27:05.639 "properties": [ 00:27:05.639 { 00:27:05.639 "name": "superblock_version", 00:27:05.639 "value": 5, 00:27:05.639 "read-only": true 00:27:05.639 }, 00:27:05.639 { 00:27:05.639 "name": "base_device", 00:27:05.639 "bands": [ 00:27:05.639 { 00:27:05.639 "id": 0, 00:27:05.639 "state": "FREE", 00:27:05.639 "validity": 0.0 
00:27:05.639 }, 00:27:05.639 { 00:27:05.639 "id": 1, 00:27:05.639 "state": "FREE", 00:27:05.639 "validity": 0.0 00:27:05.639 }, 00:27:05.639 { 00:27:05.639 "id": 2, 00:27:05.639 "state": "FREE", 00:27:05.639 "validity": 0.0 00:27:05.639 }, 00:27:05.639 { 00:27:05.639 "id": 3, 00:27:05.639 "state": "FREE", 00:27:05.639 "validity": 0.0 00:27:05.639 }, 00:27:05.639 { 00:27:05.639 "id": 4, 00:27:05.639 "state": "FREE", 00:27:05.639 "validity": 0.0 00:27:05.639 }, 00:27:05.639 { 00:27:05.639 "id": 5, 00:27:05.639 "state": "FREE", 00:27:05.639 "validity": 0.0 00:27:05.639 }, 00:27:05.639 { 00:27:05.639 "id": 6, 00:27:05.639 "state": "FREE", 00:27:05.639 "validity": 0.0 00:27:05.639 }, 00:27:05.639 { 00:27:05.639 "id": 7, 00:27:05.639 "state": "FREE", 00:27:05.639 "validity": 0.0 00:27:05.639 }, 00:27:05.639 { 00:27:05.639 "id": 8, 00:27:05.639 "state": "FREE", 00:27:05.639 "validity": 0.0 00:27:05.639 }, 00:27:05.639 { 00:27:05.639 "id": 9, 00:27:05.639 "state": "FREE", 00:27:05.639 "validity": 0.0 00:27:05.639 }, 00:27:05.639 { 00:27:05.639 "id": 10, 00:27:05.639 "state": "FREE", 00:27:05.639 "validity": 0.0 00:27:05.639 }, 00:27:05.639 { 00:27:05.639 "id": 11, 00:27:05.639 "state": "FREE", 00:27:05.639 "validity": 0.0 00:27:05.639 }, 00:27:05.639 { 00:27:05.639 "id": 12, 00:27:05.639 "state": "FREE", 00:27:05.639 "validity": 0.0 00:27:05.639 }, 00:27:05.639 { 00:27:05.639 "id": 13, 00:27:05.639 "state": "FREE", 00:27:05.639 "validity": 0.0 00:27:05.639 }, 00:27:05.639 { 00:27:05.639 "id": 14, 00:27:05.639 "state": "FREE", 00:27:05.639 "validity": 0.0 00:27:05.639 }, 00:27:05.639 { 00:27:05.639 "id": 15, 00:27:05.639 "state": "FREE", 00:27:05.639 "validity": 0.0 00:27:05.639 }, 00:27:05.639 { 00:27:05.639 "id": 16, 00:27:05.639 "state": "FREE", 00:27:05.639 "validity": 0.0 00:27:05.639 }, 00:27:05.639 { 00:27:05.639 "id": 17, 00:27:05.639 "state": "FREE", 00:27:05.639 "validity": 0.0 00:27:05.639 } 00:27:05.639 ], 00:27:05.639 "read-only": true 00:27:05.639 }, 00:27:05.639 { 00:27:05.639 "name": "cache_device", 00:27:05.639 "type": "bdev", 00:27:05.639 "chunks": [ 00:27:05.639 { 00:27:05.639 "id": 0, 00:27:05.639 "state": "INACTIVE", 00:27:05.639 "utilization": 0.0 00:27:05.639 }, 00:27:05.639 { 00:27:05.639 "id": 1, 00:27:05.639 "state": "CLOSED", 00:27:05.639 "utilization": 1.0 00:27:05.639 }, 00:27:05.639 { 00:27:05.639 "id": 2, 00:27:05.639 "state": "CLOSED", 00:27:05.639 "utilization": 1.0 00:27:05.639 }, 00:27:05.639 { 00:27:05.639 "id": 3, 00:27:05.639 "state": "OPEN", 00:27:05.639 "utilization": 0.001953125 00:27:05.639 }, 00:27:05.639 { 00:27:05.639 "id": 4, 00:27:05.639 "state": "OPEN", 00:27:05.639 "utilization": 0.0 00:27:05.639 } 00:27:05.639 ], 00:27:05.639 "read-only": true 00:27:05.639 }, 00:27:05.639 { 00:27:05.639 "name": "verbose_mode", 00:27:05.639 "value": true, 00:27:05.639 "unit": "", 00:27:05.639 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:27:05.639 }, 00:27:05.639 { 00:27:05.639 "name": "prep_upgrade_on_shutdown", 00:27:05.639 "value": false, 00:27:05.639 "unit": "", 00:27:05.639 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:27:05.639 } 00:27:05.639 ] 00:27:05.639 } 00:27:05.639 10:59:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:27:05.899 [2024-12-16 10:59:05.673340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:27:05.899 [2024-12-16 10:59:05.673443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:27:05.899 [2024-12-16 10:59:05.673484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:27:05.899 [2024-12-16 10:59:05.673502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:05.899 [2024-12-16 10:59:05.673531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:05.899 [2024-12-16 10:59:05.673548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:27:05.899 [2024-12-16 10:59:05.673563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:27:05.899 [2024-12-16 10:59:05.673577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:05.899 [2024-12-16 10:59:05.673600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:05.899 [2024-12-16 10:59:05.673616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:27:05.899 [2024-12-16 10:59:05.673632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:27:05.899 [2024-12-16 10:59:05.673668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:05.899 [2024-12-16 10:59:05.673713] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.361 ms, result 0 00:27:05.899 true 00:27:05.899 10:59:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:27:05.899 10:59:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:05.899 10:59:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:27:06.160 10:59:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:27:06.160 10:59:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:27:06.160 10:59:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:27:06.160 [2024-12-16 10:59:06.080418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:06.160 [2024-12-16 10:59:06.080454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:27:06.160 [2024-12-16 10:59:06.080463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:27:06.160 [2024-12-16 10:59:06.080469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:06.160 [2024-12-16 10:59:06.080486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:06.160 [2024-12-16 10:59:06.080492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:27:06.160 [2024-12-16 10:59:06.080498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:27:06.160 [2024-12-16 10:59:06.080504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:06.160 [2024-12-16 10:59:06.080518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:06.160 [2024-12-16 10:59:06.080524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:27:06.160 [2024-12-16 10:59:06.080531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:27:06.160 [2024-12-16 10:59:06.080536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:27:06.160 [2024-12-16 10:59:06.080576] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.149 ms, result 0 00:27:06.160 true 00:27:06.160 10:59:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:06.422 { 00:27:06.422 "name": "ftl", 00:27:06.422 "properties": [ 00:27:06.422 { 00:27:06.422 "name": "superblock_version", 00:27:06.422 "value": 5, 00:27:06.422 "read-only": true 00:27:06.422 }, 00:27:06.422 { 00:27:06.422 "name": "base_device", 00:27:06.422 "bands": [ 00:27:06.422 { 00:27:06.422 "id": 0, 00:27:06.422 "state": "FREE", 00:27:06.422 "validity": 0.0 00:27:06.422 }, 00:27:06.422 { 00:27:06.422 "id": 1, 00:27:06.422 "state": "FREE", 00:27:06.422 "validity": 0.0 00:27:06.422 }, 00:27:06.422 { 00:27:06.422 "id": 2, 00:27:06.422 "state": "FREE", 00:27:06.422 "validity": 0.0 00:27:06.422 }, 00:27:06.422 { 00:27:06.422 "id": 3, 00:27:06.422 "state": "FREE", 00:27:06.422 "validity": 0.0 00:27:06.422 }, 00:27:06.422 { 00:27:06.422 "id": 4, 00:27:06.422 "state": "FREE", 00:27:06.422 "validity": 0.0 00:27:06.422 }, 00:27:06.422 { 00:27:06.422 "id": 5, 00:27:06.422 "state": "FREE", 00:27:06.422 "validity": 0.0 00:27:06.422 }, 00:27:06.422 { 00:27:06.422 "id": 6, 00:27:06.422 "state": "FREE", 00:27:06.422 "validity": 0.0 00:27:06.422 }, 00:27:06.422 { 00:27:06.422 "id": 7, 00:27:06.422 "state": "FREE", 00:27:06.422 "validity": 0.0 00:27:06.422 }, 00:27:06.422 { 00:27:06.422 "id": 8, 00:27:06.422 "state": "FREE", 00:27:06.422 "validity": 0.0 00:27:06.422 }, 00:27:06.422 { 00:27:06.422 "id": 9, 00:27:06.422 "state": "FREE", 00:27:06.422 "validity": 0.0 00:27:06.422 }, 00:27:06.422 { 00:27:06.422 "id": 10, 00:27:06.422 "state": "FREE", 00:27:06.422 "validity": 0.0 00:27:06.422 }, 00:27:06.422 { 00:27:06.422 "id": 11, 00:27:06.422 "state": "FREE", 00:27:06.422 "validity": 0.0 00:27:06.422 }, 00:27:06.422 { 00:27:06.422 "id": 12, 00:27:06.422 "state": "FREE", 00:27:06.422 "validity": 0.0 00:27:06.422 }, 00:27:06.422 { 00:27:06.422 "id": 13, 00:27:06.422 "state": "FREE", 00:27:06.422 "validity": 0.0 00:27:06.422 }, 00:27:06.422 { 00:27:06.422 "id": 14, 00:27:06.422 "state": "FREE", 00:27:06.422 "validity": 0.0 00:27:06.422 }, 00:27:06.422 { 00:27:06.422 "id": 15, 00:27:06.422 "state": "FREE", 00:27:06.422 "validity": 0.0 00:27:06.422 }, 00:27:06.422 { 00:27:06.422 "id": 16, 00:27:06.422 "state": "FREE", 00:27:06.422 "validity": 0.0 00:27:06.422 }, 00:27:06.422 { 00:27:06.422 "id": 17, 00:27:06.422 "state": "FREE", 00:27:06.422 "validity": 0.0 00:27:06.422 } 00:27:06.422 ], 00:27:06.422 "read-only": true 00:27:06.422 }, 00:27:06.422 { 00:27:06.422 "name": "cache_device", 00:27:06.422 "type": "bdev", 00:27:06.422 "chunks": [ 00:27:06.422 { 00:27:06.422 "id": 0, 00:27:06.422 "state": "INACTIVE", 00:27:06.422 "utilization": 0.0 00:27:06.422 }, 00:27:06.422 { 00:27:06.422 "id": 1, 00:27:06.422 "state": "CLOSED", 00:27:06.422 "utilization": 1.0 00:27:06.422 }, 00:27:06.422 { 00:27:06.422 "id": 2, 00:27:06.422 "state": "CLOSED", 00:27:06.422 "utilization": 1.0 00:27:06.422 }, 00:27:06.422 { 00:27:06.422 "id": 3, 00:27:06.422 "state": "OPEN", 00:27:06.422 "utilization": 0.001953125 00:27:06.422 }, 00:27:06.422 { 00:27:06.422 "id": 4, 00:27:06.422 "state": "OPEN", 00:27:06.422 "utilization": 0.0 00:27:06.422 } 00:27:06.422 ], 00:27:06.422 "read-only": true 00:27:06.422 }, 00:27:06.422 { 00:27:06.422 "name": "verbose_mode", 
00:27:06.422 "value": true, 00:27:06.422 "unit": "", 00:27:06.422 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:27:06.422 }, 00:27:06.422 { 00:27:06.422 "name": "prep_upgrade_on_shutdown", 00:27:06.422 "value": true, 00:27:06.422 "unit": "", 00:27:06.422 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:27:06.422 } 00:27:06.422 ] 00:27:06.422 } 00:27:06.422 10:59:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:27:06.422 10:59:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 91683 ]] 00:27:06.422 10:59:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 91683 00:27:06.422 10:59:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@950 -- # '[' -z 91683 ']' 00:27:06.422 10:59:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # kill -0 91683 00:27:06.422 10:59:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # uname 00:27:06.422 10:59:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:06.422 10:59:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 91683 00:27:06.422 killing process with pid 91683 00:27:06.422 10:59:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:06.422 10:59:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:06.422 10:59:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@968 -- # echo 'killing process with pid 91683' 00:27:06.422 10:59:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@969 -- # kill 91683 00:27:06.422 10:59:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@974 -- # wait 91683 00:27:06.684 [2024-12-16 10:59:06.443657] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:27:06.684 [2024-12-16 10:59:06.447257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:06.684 [2024-12-16 10:59:06.447290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:27:06.684 [2024-12-16 10:59:06.447301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:27:06.684 [2024-12-16 10:59:06.447307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:06.684 [2024-12-16 10:59:06.447329] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:27:06.684 [2024-12-16 10:59:06.447832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:06.684 [2024-12-16 10:59:06.447849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:27:06.684 [2024-12-16 10:59:06.447857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.492 ms 00:27:06.684 [2024-12-16 10:59:06.447864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:16.691 [2024-12-16 10:59:14.839765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:16.691 [2024-12-16 10:59:14.839830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:27:16.691 [2024-12-16 10:59:14.839844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8391.846 ms 00:27:16.691 [2024-12-16 10:59:14.839855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:16.691 [2024-12-16 10:59:14.841014] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl] Action 00:27:16.691 [2024-12-16 10:59:14.841079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:27:16.691 [2024-12-16 10:59:14.841090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.146 ms 00:27:16.691 [2024-12-16 10:59:14.841096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:16.691 [2024-12-16 10:59:14.841957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:16.691 [2024-12-16 10:59:14.841968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:27:16.691 [2024-12-16 10:59:14.841984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.837 ms 00:27:16.691 [2024-12-16 10:59:14.841994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:16.691 [2024-12-16 10:59:14.844584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:16.691 [2024-12-16 10:59:14.844710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:27:16.691 [2024-12-16 10:59:14.844734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.558 ms 00:27:16.691 [2024-12-16 10:59:14.844741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:16.691 [2024-12-16 10:59:14.847312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:16.691 [2024-12-16 10:59:14.847344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:27:16.691 [2024-12-16 10:59:14.847353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.547 ms 00:27:16.691 [2024-12-16 10:59:14.847359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:16.691 [2024-12-16 10:59:14.847413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:16.691 [2024-12-16 10:59:14.847420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:27:16.691 [2024-12-16 10:59:14.847432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.029 ms 00:27:16.691 [2024-12-16 10:59:14.847439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:16.691 [2024-12-16 10:59:14.848777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:16.691 [2024-12-16 10:59:14.848867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:27:16.691 [2024-12-16 10:59:14.848911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.326 ms 00:27:16.691 [2024-12-16 10:59:14.848939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:16.691 [2024-12-16 10:59:14.850196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:16.691 [2024-12-16 10:59:14.850286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:27:16.691 [2024-12-16 10:59:14.850330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.224 ms 00:27:16.691 [2024-12-16 10:59:14.850346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:16.691 [2024-12-16 10:59:14.851563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:16.691 [2024-12-16 10:59:14.851651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:27:16.691 [2024-12-16 10:59:14.851694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.186 ms 00:27:16.691 [2024-12-16 10:59:14.851711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:16.691 [2024-12-16 10:59:14.854005] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:16.691 [2024-12-16 10:59:14.854341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:27:16.691 [2024-12-16 10:59:14.854391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.230 ms 00:27:16.692 [2024-12-16 10:59:14.854415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:16.692 [2024-12-16 10:59:14.854500] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:27:16.692 [2024-12-16 10:59:14.854539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:27:16.692 [2024-12-16 10:59:14.854568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:27:16.692 [2024-12-16 10:59:14.854593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:27:16.692 [2024-12-16 10:59:14.854617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:16.692 [2024-12-16 10:59:14.854641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:16.692 [2024-12-16 10:59:14.854664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:16.692 [2024-12-16 10:59:14.854687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:16.692 [2024-12-16 10:59:14.854711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:16.692 [2024-12-16 10:59:14.854733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:16.692 [2024-12-16 10:59:14.854757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:16.692 [2024-12-16 10:59:14.854780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:16.692 [2024-12-16 10:59:14.854803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:16.692 [2024-12-16 10:59:14.854827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:16.692 [2024-12-16 10:59:14.854851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:16.692 [2024-12-16 10:59:14.854873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:16.692 [2024-12-16 10:59:14.854897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:16.692 [2024-12-16 10:59:14.854920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:16.692 [2024-12-16 10:59:14.854981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:16.692 [2024-12-16 10:59:14.855010] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:27:16.692 [2024-12-16 10:59:14.855034] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 9711f87b-2f3a-4e62-9cc7-2dfc884de907 00:27:16.692 [2024-12-16 10:59:14.855057] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:27:16.692 [2024-12-16 10:59:14.855079] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl] total writes: 786752 00:27:16.692 [2024-12-16 10:59:14.855101] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:27:16.692 [2024-12-16 10:59:14.855124] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:27:16.692 [2024-12-16 10:59:14.855145] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:27:16.692 [2024-12-16 10:59:14.855184] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:27:16.692 [2024-12-16 10:59:14.855206] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:27:16.692 [2024-12-16 10:59:14.855225] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:27:16.692 [2024-12-16 10:59:14.855245] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:27:16.692 [2024-12-16 10:59:14.855268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:16.692 [2024-12-16 10:59:14.855292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:27:16.692 [2024-12-16 10:59:14.855316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.770 ms 00:27:16.692 [2024-12-16 10:59:14.855338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:16.692 [2024-12-16 10:59:14.858049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:16.692 [2024-12-16 10:59:14.858083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:27:16.692 [2024-12-16 10:59:14.858094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.664 ms 00:27:16.692 [2024-12-16 10:59:14.858109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:16.692 [2024-12-16 10:59:14.858208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:16.692 [2024-12-16 10:59:14.858219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:27:16.692 [2024-12-16 10:59:14.858230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.076 ms 00:27:16.692 [2024-12-16 10:59:14.858239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:16.692 [2024-12-16 10:59:14.865033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:16.692 [2024-12-16 10:59:14.865067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:27:16.692 [2024-12-16 10:59:14.865083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:16.692 [2024-12-16 10:59:14.865097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:16.692 [2024-12-16 10:59:14.865130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:16.692 [2024-12-16 10:59:14.865139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:27:16.692 [2024-12-16 10:59:14.865149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:16.692 [2024-12-16 10:59:14.865164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:16.692 [2024-12-16 10:59:14.865233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:16.692 [2024-12-16 10:59:14.865245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:27:16.692 [2024-12-16 10:59:14.865254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:16.692 [2024-12-16 10:59:14.865266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:16.692 [2024-12-16 10:59:14.865288] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:16.692 [2024-12-16 10:59:14.865297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:27:16.692 [2024-12-16 10:59:14.865306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:16.692 [2024-12-16 10:59:14.865314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:16.692 [2024-12-16 10:59:14.876729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:16.692 [2024-12-16 10:59:14.876767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:27:16.692 [2024-12-16 10:59:14.876777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:16.692 [2024-12-16 10:59:14.876790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:16.692 [2024-12-16 10:59:14.886260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:16.692 [2024-12-16 10:59:14.886299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:27:16.692 [2024-12-16 10:59:14.886310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:16.692 [2024-12-16 10:59:14.886326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:16.692 [2024-12-16 10:59:14.886393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:16.692 [2024-12-16 10:59:14.886406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:27:16.692 [2024-12-16 10:59:14.886415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:16.692 [2024-12-16 10:59:14.886423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:16.692 [2024-12-16 10:59:14.886459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:16.692 [2024-12-16 10:59:14.886468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:27:16.692 [2024-12-16 10:59:14.886477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:16.692 [2024-12-16 10:59:14.886486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:16.692 [2024-12-16 10:59:14.886554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:16.692 [2024-12-16 10:59:14.886564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:27:16.692 [2024-12-16 10:59:14.886571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:16.692 [2024-12-16 10:59:14.886579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:16.692 [2024-12-16 10:59:14.886613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:16.692 [2024-12-16 10:59:14.886625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:27:16.692 [2024-12-16 10:59:14.886635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:16.692 [2024-12-16 10:59:14.886642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:16.692 [2024-12-16 10:59:14.886687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:16.692 [2024-12-16 10:59:14.886697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:27:16.692 [2024-12-16 10:59:14.886706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:16.692 [2024-12-16 10:59:14.886714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:16.692 
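A note on the statistics dump earlier in this shutdown sequence: the write amplification factor (WAF) the FTL layer reports is simply total device writes divided by user writes, and the two counters logged in this run reproduce the reported value exactly. This is arithmetic only; all three numbers are taken verbatim from the ftl_debug stats lines above:

  WAF = total writes / user writes = 786752 / 524288 ≈ 1.5006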
[2024-12-16 10:59:14.886765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:16.692 [2024-12-16 10:59:14.886776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:27:16.692 [2024-12-16 10:59:14.886784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:16.692 [2024-12-16 10:59:14.886791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:16.692 [2024-12-16 10:59:14.886951] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 8439.607 ms, result 0 00:27:19.997 10:59:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:27:19.997 10:59:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:27:19.997 10:59:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:27:19.997 10:59:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:27:19.997 10:59:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:19.997 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:19.997 10:59:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=92182 00:27:19.997 10:59:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:27:19.997 10:59:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 92182 00:27:19.997 10:59:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 92182 ']' 00:27:19.997 10:59:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:19.997 10:59:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:19.997 10:59:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:19.997 10:59:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:19.997 10:59:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:19.997 10:59:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:20.258 [2024-12-16 10:59:20.012730] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
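Before the EAL banner below, it may help to see the restart-and-retoggle flow the harness is about to drive, pulled out of the trace. This is a minimal sketch, not the test script itself: it assumes the same repo layout as this run, backgrounds the target with a plain `&` where the harness instead blocks in waitforlisten until the RPC socket answers, and reuses only commands already exercised verbatim in this log:

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' \
      --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json &
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl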
00:27:20.258 [2024-12-16 10:59:20.013104] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92182 ] 00:27:20.258 [2024-12-16 10:59:20.150282] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:20.258 [2024-12-16 10:59:20.223103] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:27:20.832 [2024-12-16 10:59:20.646456] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:27:20.832 [2024-12-16 10:59:20.646558] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:27:20.832 [2024-12-16 10:59:20.799453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:20.832 [2024-12-16 10:59:20.799519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:27:20.832 [2024-12-16 10:59:20.799541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:27:20.832 [2024-12-16 10:59:20.799551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:20.832 [2024-12-16 10:59:20.799623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:20.832 [2024-12-16 10:59:20.799634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:27:20.832 [2024-12-16 10:59:20.799643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.045 ms 00:27:20.832 [2024-12-16 10:59:20.799652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:20.832 [2024-12-16 10:59:20.799679] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:27:20.832 [2024-12-16 10:59:20.799995] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:27:20.832 [2024-12-16 10:59:20.800015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:20.832 [2024-12-16 10:59:20.800024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:27:20.832 [2024-12-16 10:59:20.800037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.341 ms 00:27:20.832 [2024-12-16 10:59:20.800046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:20.832 [2024-12-16 10:59:20.802352] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:27:20.832 [2024-12-16 10:59:20.807537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:20.832 [2024-12-16 10:59:20.807604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:27:20.832 [2024-12-16 10:59:20.807617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.188 ms 00:27:20.832 [2024-12-16 10:59:20.807629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:20.832 [2024-12-16 10:59:20.807718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:20.832 [2024-12-16 10:59:20.807728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:27:20.832 [2024-12-16 10:59:20.807738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.036 ms 00:27:20.832 [2024-12-16 10:59:20.807748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:21.095 [2024-12-16 10:59:20.819419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:21.095 [2024-12-16 
10:59:20.819467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:27:21.095 [2024-12-16 10:59:20.819483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.601 ms 00:27:21.095 [2024-12-16 10:59:20.819492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:21.095 [2024-12-16 10:59:20.819549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:21.095 [2024-12-16 10:59:20.819561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:27:21.095 [2024-12-16 10:59:20.819571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.032 ms 00:27:21.095 [2024-12-16 10:59:20.819580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:21.095 [2024-12-16 10:59:20.819648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:21.095 [2024-12-16 10:59:20.819667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:27:21.095 [2024-12-16 10:59:20.819680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:27:21.095 [2024-12-16 10:59:20.819690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:21.095 [2024-12-16 10:59:20.819722] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:27:21.095 [2024-12-16 10:59:20.822533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:21.095 [2024-12-16 10:59:20.822576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:27:21.095 [2024-12-16 10:59:20.822586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.820 ms 00:27:21.095 [2024-12-16 10:59:20.822595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:21.095 [2024-12-16 10:59:20.822627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:21.095 [2024-12-16 10:59:20.822638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:27:21.095 [2024-12-16 10:59:20.822648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:27:21.095 [2024-12-16 10:59:20.822661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:21.095 [2024-12-16 10:59:20.822687] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:27:21.095 [2024-12-16 10:59:20.822715] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:27:21.095 [2024-12-16 10:59:20.822756] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:27:21.095 [2024-12-16 10:59:20.822774] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:27:21.095 [2024-12-16 10:59:20.822889] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:27:21.095 [2024-12-16 10:59:20.822905] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:27:21.095 [2024-12-16 10:59:20.822923] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:27:21.095 [2024-12-16 10:59:20.822960] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:27:21.095 [2024-12-16 10:59:20.822971] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device 
capacity: 5120.00 MiB 00:27:21.095 [2024-12-16 10:59:20.822987] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:27:21.095 [2024-12-16 10:59:20.822995] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:27:21.095 [2024-12-16 10:59:20.823003] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:27:21.095 [2024-12-16 10:59:20.823013] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:27:21.095 [2024-12-16 10:59:20.823024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:21.095 [2024-12-16 10:59:20.823034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:27:21.095 [2024-12-16 10:59:20.823043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.340 ms 00:27:21.095 [2024-12-16 10:59:20.823051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:21.095 [2024-12-16 10:59:20.823140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:21.095 [2024-12-16 10:59:20.823151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:27:21.095 [2024-12-16 10:59:20.823159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.069 ms 00:27:21.095 [2024-12-16 10:59:20.823167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:21.095 [2024-12-16 10:59:20.823275] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:27:21.095 [2024-12-16 10:59:20.823288] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:27:21.095 [2024-12-16 10:59:20.823298] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:21.095 [2024-12-16 10:59:20.823310] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:21.095 [2024-12-16 10:59:20.823320] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:27:21.095 [2024-12-16 10:59:20.823328] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:27:21.095 [2024-12-16 10:59:20.823337] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:27:21.095 [2024-12-16 10:59:20.823346] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:27:21.095 [2024-12-16 10:59:20.823358] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:27:21.095 [2024-12-16 10:59:20.823368] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:21.095 [2024-12-16 10:59:20.823376] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:27:21.095 [2024-12-16 10:59:20.823384] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:27:21.095 [2024-12-16 10:59:20.823392] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:21.095 [2024-12-16 10:59:20.823402] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:27:21.095 [2024-12-16 10:59:20.823412] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:27:21.095 [2024-12-16 10:59:20.823421] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:21.095 [2024-12-16 10:59:20.823435] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:27:21.095 [2024-12-16 10:59:20.823444] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:27:21.095 [2024-12-16 10:59:20.823452] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:21.095 [2024-12-16 10:59:20.823460] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:27:21.095 [2024-12-16 10:59:20.823467] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:27:21.095 [2024-12-16 10:59:20.823475] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:21.095 [2024-12-16 10:59:20.823485] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:27:21.095 [2024-12-16 10:59:20.823494] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:27:21.095 [2024-12-16 10:59:20.823501] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:21.095 [2024-12-16 10:59:20.823509] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:27:21.095 [2024-12-16 10:59:20.823517] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:27:21.095 [2024-12-16 10:59:20.823524] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:21.095 [2024-12-16 10:59:20.823535] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:27:21.095 [2024-12-16 10:59:20.823543] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:27:21.095 [2024-12-16 10:59:20.823551] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:21.095 [2024-12-16 10:59:20.823557] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:27:21.095 [2024-12-16 10:59:20.823567] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:27:21.095 [2024-12-16 10:59:20.823574] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:21.095 [2024-12-16 10:59:20.823580] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:27:21.095 [2024-12-16 10:59:20.823587] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:27:21.095 [2024-12-16 10:59:20.823596] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:21.095 [2024-12-16 10:59:20.823604] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:27:21.095 [2024-12-16 10:59:20.823610] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:27:21.096 [2024-12-16 10:59:20.823617] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:21.096 [2024-12-16 10:59:20.823624] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:27:21.096 [2024-12-16 10:59:20.823630] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:27:21.096 [2024-12-16 10:59:20.823637] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:21.096 [2024-12-16 10:59:20.823643] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:27:21.096 [2024-12-16 10:59:20.823656] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:27:21.096 [2024-12-16 10:59:20.823666] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:21.096 [2024-12-16 10:59:20.823678] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:21.096 [2024-12-16 10:59:20.823687] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:27:21.096 [2024-12-16 10:59:20.823697] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:27:21.096 [2024-12-16 10:59:20.823704] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:27:21.096 [2024-12-16 10:59:20.823711] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:27:21.096 [2024-12-16 10:59:20.823719] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:27:21.096 [2024-12-16 10:59:20.823726] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:27:21.096 [2024-12-16 10:59:20.823735] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:27:21.096 [2024-12-16 10:59:20.823744] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:21.096 [2024-12-16 10:59:20.823753] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:27:21.096 [2024-12-16 10:59:20.823760] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:27:21.096 [2024-12-16 10:59:20.823769] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:27:21.096 [2024-12-16 10:59:20.823777] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:27:21.096 [2024-12-16 10:59:20.823785] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:27:21.096 [2024-12-16 10:59:20.823794] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:27:21.096 [2024-12-16 10:59:20.823801] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:27:21.096 [2024-12-16 10:59:20.823808] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:27:21.096 [2024-12-16 10:59:20.823815] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:27:21.096 [2024-12-16 10:59:20.823826] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:27:21.096 [2024-12-16 10:59:20.823835] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:27:21.096 [2024-12-16 10:59:20.823843] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:27:21.096 [2024-12-16 10:59:20.823850] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:27:21.096 [2024-12-16 10:59:20.823857] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:27:21.096 [2024-12-16 10:59:20.823864] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:27:21.096 [2024-12-16 10:59:20.823872] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:21.096 [2024-12-16 10:59:20.823882] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:21.096 [2024-12-16 10:59:20.823891] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:27:21.096 [2024-12-16 10:59:20.823898] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:27:21.096 [2024-12-16 10:59:20.823906] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:27:21.096 [2024-12-16 10:59:20.823924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:21.096 [2024-12-16 10:59:20.823949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:27:21.096 [2024-12-16 10:59:20.823964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.722 ms 00:27:21.096 [2024-12-16 10:59:20.823972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:21.096 [2024-12-16 10:59:20.824045] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:27:21.096 [2024-12-16 10:59:20.824063] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:27:25.310 [2024-12-16 10:59:24.413425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:25.310 [2024-12-16 10:59:24.413503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:27:25.310 [2024-12-16 10:59:24.413531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3589.363 ms 00:27:25.310 [2024-12-16 10:59:24.413541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:25.310 [2024-12-16 10:59:24.429706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:25.311 [2024-12-16 10:59:24.429767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:27:25.311 [2024-12-16 10:59:24.429782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 16.037 ms 00:27:25.311 [2024-12-16 10:59:24.429791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:25.311 [2024-12-16 10:59:24.429851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:25.311 [2024-12-16 10:59:24.429861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:27:25.311 [2024-12-16 10:59:24.429870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:27:25.311 [2024-12-16 10:59:24.429879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:25.311 [2024-12-16 10:59:24.452567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:25.311 [2024-12-16 10:59:24.452628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:27:25.311 [2024-12-16 10:59:24.452643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 22.608 ms 00:27:25.311 [2024-12-16 10:59:24.452652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:25.311 [2024-12-16 10:59:24.452708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:25.311 [2024-12-16 10:59:24.452731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:27:25.311 [2024-12-16 10:59:24.452742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:27:25.311 [2024-12-16 10:59:24.452751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:25.311 [2024-12-16 10:59:24.453470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:25.311 [2024-12-16 10:59:24.453504] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:27:25.311 [2024-12-16 10:59:24.453518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.662 ms 00:27:25.311 [2024-12-16 10:59:24.453530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:25.311 [2024-12-16 10:59:24.453587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:25.311 [2024-12-16 10:59:24.453601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:27:25.311 [2024-12-16 10:59:24.453613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.023 ms 00:27:25.311 [2024-12-16 10:59:24.453622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:25.311 [2024-12-16 10:59:24.461879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:25.311 [2024-12-16 10:59:24.461920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:27:25.311 [2024-12-16 10:59:24.461956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.221 ms 00:27:25.311 [2024-12-16 10:59:24.461966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:25.311 [2024-12-16 10:59:24.465972] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:27:25.311 [2024-12-16 10:59:24.466035] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:27:25.311 [2024-12-16 10:59:24.466053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:25.311 [2024-12-16 10:59:24.466074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:27:25.311 [2024-12-16 10:59:24.466085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.992 ms 00:27:25.311 [2024-12-16 10:59:24.466095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:25.311 [2024-12-16 10:59:24.471077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:25.311 [2024-12-16 10:59:24.471248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:27:25.311 [2024-12-16 10:59:24.471275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.922 ms 00:27:25.311 [2024-12-16 10:59:24.471285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:25.311 [2024-12-16 10:59:24.473532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:25.311 [2024-12-16 10:59:24.473573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:27:25.311 [2024-12-16 10:59:24.473583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.162 ms 00:27:25.311 [2024-12-16 10:59:24.473590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:25.311 [2024-12-16 10:59:24.475586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:25.311 [2024-12-16 10:59:24.475622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:27:25.311 [2024-12-16 10:59:24.475632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.953 ms 00:27:25.311 [2024-12-16 10:59:24.475639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:25.311 [2024-12-16 10:59:24.476047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:25.311 [2024-12-16 10:59:24.476066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:27:25.311 [2024-12-16 
10:59:24.476078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.314 ms 00:27:25.311 [2024-12-16 10:59:24.476086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:25.311 [2024-12-16 10:59:24.494522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:25.311 [2024-12-16 10:59:24.494686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:27:25.311 [2024-12-16 10:59:24.494713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.410 ms 00:27:25.311 [2024-12-16 10:59:24.494722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:25.311 [2024-12-16 10:59:24.504100] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:27:25.311 [2024-12-16 10:59:24.504880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:25.311 [2024-12-16 10:59:24.504918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:27:25.311 [2024-12-16 10:59:24.504944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.118 ms 00:27:25.311 [2024-12-16 10:59:24.504956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:25.311 [2024-12-16 10:59:24.505043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:25.311 [2024-12-16 10:59:24.505054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:27:25.311 [2024-12-16 10:59:24.505063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:27:25.311 [2024-12-16 10:59:24.505075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:25.311 [2024-12-16 10:59:24.505135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:25.311 [2024-12-16 10:59:24.505145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:27:25.311 [2024-12-16 10:59:24.505157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:27:25.311 [2024-12-16 10:59:24.505165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:25.311 [2024-12-16 10:59:24.505191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:25.311 [2024-12-16 10:59:24.505200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:27:25.311 [2024-12-16 10:59:24.505208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:27:25.311 [2024-12-16 10:59:24.505216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:25.311 [2024-12-16 10:59:24.505250] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:27:25.311 [2024-12-16 10:59:24.505260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:25.311 [2024-12-16 10:59:24.505268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:27:25.311 [2024-12-16 10:59:24.505276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:27:25.311 [2024-12-16 10:59:24.505284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:25.311 [2024-12-16 10:59:24.509297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:25.311 [2024-12-16 10:59:24.509341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:27:25.311 [2024-12-16 10:59:24.509352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.994 ms 00:27:25.311 [2024-12-16 10:59:24.509359] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:25.311 [2024-12-16 10:59:24.509437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:25.311 [2024-12-16 10:59:24.509451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:27:25.311 [2024-12-16 10:59:24.509460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.038 ms 00:27:25.311 [2024-12-16 10:59:24.509467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:25.311 [2024-12-16 10:59:24.510537] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3710.656 ms, result 0 00:27:25.311 [2024-12-16 10:59:24.525945] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:25.312 [2024-12-16 10:59:24.541689] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:27:25.312 [2024-12-16 10:59:24.549806] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:27:25.312 10:59:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:25.312 10:59:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:27:25.312 10:59:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:25.312 10:59:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:27:25.312 10:59:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:27:25.312 [2024-12-16 10:59:24.793922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:25.312 [2024-12-16 10:59:24.793995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:27:25.312 [2024-12-16 10:59:24.794012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:27:25.312 [2024-12-16 10:59:24.794021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:25.312 [2024-12-16 10:59:24.794047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:25.312 [2024-12-16 10:59:24.794057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:27:25.312 [2024-12-16 10:59:24.794066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:27:25.312 [2024-12-16 10:59:24.794074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:25.312 [2024-12-16 10:59:24.794099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:25.312 [2024-12-16 10:59:24.794114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:27:25.312 [2024-12-16 10:59:24.794123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:27:25.312 [2024-12-16 10:59:24.794131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:25.312 [2024-12-16 10:59:24.794195] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.270 ms, result 0 00:27:25.312 true 00:27:25.312 10:59:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:25.312 { 00:27:25.312 "name": "ftl", 00:27:25.312 "properties": [ 00:27:25.312 { 00:27:25.312 "name": "superblock_version", 00:27:25.312 "value": 5, 00:27:25.312 "read-only": true 00:27:25.312 }, 
00:27:25.312 { 00:27:25.312 "name": "base_device", 00:27:25.312 "bands": [ 00:27:25.312 { 00:27:25.312 "id": 0, 00:27:25.312 "state": "CLOSED", 00:27:25.312 "validity": 1.0 00:27:25.312 }, 00:27:25.312 { 00:27:25.312 "id": 1, 00:27:25.312 "state": "CLOSED", 00:27:25.312 "validity": 1.0 00:27:25.312 }, 00:27:25.312 { 00:27:25.312 "id": 2, 00:27:25.312 "state": "CLOSED", 00:27:25.312 "validity": 0.007843137254901933 00:27:25.312 }, 00:27:25.312 { 00:27:25.312 "id": 3, 00:27:25.312 "state": "FREE", 00:27:25.312 "validity": 0.0 00:27:25.312 }, 00:27:25.312 { 00:27:25.312 "id": 4, 00:27:25.312 "state": "FREE", 00:27:25.312 "validity": 0.0 00:27:25.312 }, 00:27:25.312 { 00:27:25.312 "id": 5, 00:27:25.312 "state": "FREE", 00:27:25.312 "validity": 0.0 00:27:25.312 }, 00:27:25.312 { 00:27:25.312 "id": 6, 00:27:25.312 "state": "FREE", 00:27:25.312 "validity": 0.0 00:27:25.312 }, 00:27:25.312 { 00:27:25.312 "id": 7, 00:27:25.312 "state": "FREE", 00:27:25.312 "validity": 0.0 00:27:25.312 }, 00:27:25.312 { 00:27:25.312 "id": 8, 00:27:25.312 "state": "FREE", 00:27:25.312 "validity": 0.0 00:27:25.312 }, 00:27:25.312 { 00:27:25.312 "id": 9, 00:27:25.312 "state": "FREE", 00:27:25.312 "validity": 0.0 00:27:25.312 }, 00:27:25.312 { 00:27:25.312 "id": 10, 00:27:25.312 "state": "FREE", 00:27:25.312 "validity": 0.0 00:27:25.312 }, 00:27:25.312 { 00:27:25.312 "id": 11, 00:27:25.312 "state": "FREE", 00:27:25.312 "validity": 0.0 00:27:25.312 }, 00:27:25.312 { 00:27:25.312 "id": 12, 00:27:25.312 "state": "FREE", 00:27:25.312 "validity": 0.0 00:27:25.312 }, 00:27:25.312 { 00:27:25.312 "id": 13, 00:27:25.312 "state": "FREE", 00:27:25.312 "validity": 0.0 00:27:25.312 }, 00:27:25.312 { 00:27:25.312 "id": 14, 00:27:25.312 "state": "FREE", 00:27:25.312 "validity": 0.0 00:27:25.312 }, 00:27:25.312 { 00:27:25.312 "id": 15, 00:27:25.312 "state": "FREE", 00:27:25.312 "validity": 0.0 00:27:25.312 }, 00:27:25.312 { 00:27:25.312 "id": 16, 00:27:25.312 "state": "FREE", 00:27:25.312 "validity": 0.0 00:27:25.312 }, 00:27:25.312 { 00:27:25.312 "id": 17, 00:27:25.312 "state": "FREE", 00:27:25.312 "validity": 0.0 00:27:25.312 } 00:27:25.312 ], 00:27:25.312 "read-only": true 00:27:25.312 }, 00:27:25.312 { 00:27:25.312 "name": "cache_device", 00:27:25.312 "type": "bdev", 00:27:25.312 "chunks": [ 00:27:25.312 { 00:27:25.312 "id": 0, 00:27:25.312 "state": "INACTIVE", 00:27:25.312 "utilization": 0.0 00:27:25.312 }, 00:27:25.312 { 00:27:25.312 "id": 1, 00:27:25.312 "state": "OPEN", 00:27:25.312 "utilization": 0.0 00:27:25.312 }, 00:27:25.312 { 00:27:25.312 "id": 2, 00:27:25.312 "state": "OPEN", 00:27:25.312 "utilization": 0.0 00:27:25.312 }, 00:27:25.312 { 00:27:25.312 "id": 3, 00:27:25.312 "state": "FREE", 00:27:25.312 "utilization": 0.0 00:27:25.312 }, 00:27:25.312 { 00:27:25.312 "id": 4, 00:27:25.312 "state": "FREE", 00:27:25.312 "utilization": 0.0 00:27:25.312 } 00:27:25.312 ], 00:27:25.312 "read-only": true 00:27:25.312 }, 00:27:25.312 { 00:27:25.312 "name": "verbose_mode", 00:27:25.312 "value": true, 00:27:25.312 "unit": "", 00:27:25.312 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:27:25.312 }, 00:27:25.312 { 00:27:25.312 "name": "prep_upgrade_on_shutdown", 00:27:25.312 "value": false, 00:27:25.312 "unit": "", 00:27:25.312 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:27:25.312 } 00:27:25.312 ] 00:27:25.312 } 00:27:25.312 10:59:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:27:25.313 10:59:25 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:27:25.313 10:59:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:25.313 10:59:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:27:25.313 10:59:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:27:25.313 10:59:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:27:25.313 10:59:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:27:25.313 10:59:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:25.574 Validate MD5 checksum, iteration 1 00:27:25.574 10:59:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:27:25.574 10:59:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:27:25.574 10:59:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:27:25.574 10:59:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:27:25.574 10:59:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:27:25.574 10:59:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:27:25.574 10:59:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:27:25.574 10:59:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:25.574 10:59:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:25.574 10:59:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:25.574 10:59:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:25.574 10:59:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:27:25.574 10:59:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:25.574 [2024-12-16 10:59:25.542877] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
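
The pair of jq filters traced above is how the harness decides whether the device state is clean before the upgrade-shutdown sequence: it pulls the properties JSON from bdev_ftl_get_properties and counts cache chunks with non-zero utilization and bands still in the OPENED state. A minimal standalone sketch of the same check, assuming rpc.py is run from the repo root and the bdev is named ftl as in this run:

    #!/usr/bin/env bash
    # Fetch the FTL property dump once and run both counts against it.
    props=$(./scripts/rpc.py bdev_ftl_get_properties -b ftl)
    used=$(jq '[.properties[] | select(.name == "cache_device")
                | .chunks[] | select(.utilization != 0.0)] | length' <<< "$props")
    opened=$(jq '[.properties[] | select(.name == "bands")
                  | .bands[] | select(.state == "OPENED")] | length' <<< "$props")
    # Nothing buffered in the NV cache and no open bands means a clean state.
    (( used == 0 && opened == 0 )) && echo clean || echo dirty

Note that in the dump above the band list sits under the property named base_device, so the second filter's select(.name == "bands") matches no property and opened comes out 0; with every band CLOSED or FREE at this point the result would be 0 either way.
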
00:27:25.574 [2024-12-16 10:59:25.543419] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92251 ] 00:27:25.835 [2024-12-16 10:59:25.678814] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:25.835 [2024-12-16 10:59:25.730287] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:27:27.223  [2024-12-16T10:59:27.784Z] Copying: 666/1024 [MB] (666 MBps) [2024-12-16T10:59:28.354Z] Copying: 1024/1024 [MB] (average 647 MBps) 00:27:28.365 00:27:28.365 10:59:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:27:28.365 10:59:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:30.904 10:59:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:27:30.904 10:59:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=74f941d41762230fe0fc55ff9eccf28a 00:27:30.904 10:59:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 74f941d41762230fe0fc55ff9eccf28a != \7\4\f\9\4\1\d\4\1\7\6\2\2\3\0\f\e\0\f\c\5\5\f\f\9\e\c\c\f\2\8\a ]] 00:27:30.904 10:59:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:27:30.904 10:59:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:27:30.904 10:59:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:27:30.904 Validate MD5 checksum, iteration 2 00:27:30.904 10:59:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:30.904 10:59:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:30.904 10:59:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:30.904 10:59:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:30.904 10:59:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:27:30.904 10:59:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:30.904 [2024-12-16 10:59:30.548152] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
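
Each checksum iteration reads a 1 GiB slice of the FTL bdev over NVMe/TCP with spdk_dd (1024 blocks of 1 MiB, queue depth 2, advancing --skip by 1024 per pass) and pipes the output file through md5sum. A condensed sketch of the loop, with paths shortened to the repo root; md5_expected is a hypothetical array standing in for wherever the harness keeps the reference digests:

    # Read the bdev in 1 GiB strides and verify each stride's md5 digest.
    iterations=2
    skip=0
    for ((i = 0; i < iterations; i++)); do
        echo "Validate MD5 checksum, iteration $((i + 1))"
        ./build/bin/spdk_dd --cpumask='[1]' --rpc-socket=/var/tmp/spdk.tgt.sock \
            --json=test/ftl/config/ini.json \
            --ib=ftln1 --of=test/ftl/file \
            --bs=1048576 --count=1024 --qd=2 --skip="$skip"
        sum=$(md5sum test/ftl/file | cut -f1 -d' ')
        [[ $sum == "${md5_expected[$i]}" ]] || exit 1   # digest must match
        skip=$((skip + 1024))
    done

The backslash-escaped string on the right-hand side of the traced [[ 74f9... != \7\4\f\9... ]] test is bash xtrace's rendering of a quoted (literal) comparison operand inside [[ ]], not a value stored with backslashes.
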
00:27:30.904 [2024-12-16 10:59:30.548259] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92312 ] 00:27:30.904 [2024-12-16 10:59:30.679961] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:30.904 [2024-12-16 10:59:30.710681] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:27:32.288  [2024-12-16T10:59:32.849Z] Copying: 602/1024 [MB] (602 MBps) [2024-12-16T10:59:35.397Z] Copying: 1024/1024 [MB] (average 591 MBps) 00:27:35.408 00:27:35.408 10:59:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:27:35.408 10:59:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:37.955 10:59:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:27:37.955 10:59:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=d322b80ca65eca0784b0918d435b8c3e 00:27:37.955 10:59:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ d322b80ca65eca0784b0918d435b8c3e != \d\3\2\2\b\8\0\c\a\6\5\e\c\a\0\7\8\4\b\0\9\1\8\d\4\3\5\b\8\c\3\e ]] 00:27:37.955 10:59:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:27:37.955 10:59:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:27:37.955 10:59:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:27:37.955 10:59:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 92182 ]] 00:27:37.955 10:59:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 92182 00:27:37.955 10:59:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:27:37.955 10:59:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:27:37.955 10:59:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:27:37.955 10:59:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:27:37.955 10:59:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:37.955 10:59:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:37.955 10:59:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=92390 00:27:37.955 10:59:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:27:37.955 10:59:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 92390 00:27:37.955 10:59:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 92390 ']' 00:27:37.955 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:37.955 10:59:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:37.955 10:59:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:37.955 10:59:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
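
What follows is the dirty-shutdown half of the test: the first target (pid 92182) is killed with SIGKILL so FTL never performs a clean shutdown, then a fresh target is started from the same JSON config, which forces the startup sequence below through full recovery (band state, P2L checkpoints, open chunks). A condensed sketch of that restart step under the same paths as the trace; waitforlisten is the autotest_common.sh helper seen above, and the pid handling mirrors the commands shown:

    # Crash the running target, then restart it with the same config.
    kill -9 "$spdk_tgt_pid"            # no clean FTL shutdown: superblock stays dirty
    unset spdk_tgt_pid
    ./build/bin/spdk_tgt --cpumask='[0]' \
        --config=test/ftl/config/tgt.json &
    spdk_tgt_pid=$!
    waitforlisten "$spdk_tgt_pid"      # blocks until /var/tmp/spdk.sock answers

The 'Set FTL dirty state' step traced earlier is what guarantees the superblock is marked dirty before the kill, so the new instance cannot take the fast clean-startup path and must run the recovery steps logged below.
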
00:27:37.955 10:59:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:37.955 10:59:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:37.956 [2024-12-16 10:59:37.522422] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:27:37.956 [2024-12-16 10:59:37.522662] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92390 ] 00:27:37.956 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 830: 92182 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:27:37.956 [2024-12-16 10:59:37.657914] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:37.956 [2024-12-16 10:59:37.692155] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:27:38.218 [2024-12-16 10:59:38.028792] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:27:38.218 [2024-12-16 10:59:38.029177] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:27:38.218 [2024-12-16 10:59:38.181851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:38.218 [2024-12-16 10:59:38.182075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:27:38.218 [2024-12-16 10:59:38.182225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:27:38.218 [2024-12-16 10:59:38.182256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:38.218 [2024-12-16 10:59:38.182345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:38.218 [2024-12-16 10:59:38.182433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:27:38.218 [2024-12-16 10:59:38.182460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.042 ms 00:27:38.218 [2024-12-16 10:59:38.182481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:38.218 [2024-12-16 10:59:38.182830] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:27:38.218 [2024-12-16 10:59:38.183136] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:27:38.218 [2024-12-16 10:59:38.183161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:38.218 [2024-12-16 10:59:38.183172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:27:38.218 [2024-12-16 10:59:38.183186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.339 ms 00:27:38.218 [2024-12-16 10:59:38.183195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:38.218 [2024-12-16 10:59:38.183475] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:27:38.218 [2024-12-16 10:59:38.189410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:38.218 [2024-12-16 10:59:38.189462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:27:38.218 [2024-12-16 10:59:38.189475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.934 ms 00:27:38.218 [2024-12-16 10:59:38.189490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:38.218 [2024-12-16 10:59:38.190919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] 
Action 00:27:38.218 [2024-12-16 10:59:38.190979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:27:38.218 [2024-12-16 10:59:38.190990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.030 ms 00:27:38.218 [2024-12-16 10:59:38.191002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:38.218 [2024-12-16 10:59:38.191315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:38.218 [2024-12-16 10:59:38.191334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:27:38.218 [2024-12-16 10:59:38.191348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.231 ms 00:27:38.218 [2024-12-16 10:59:38.191355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:38.218 [2024-12-16 10:59:38.191393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:38.218 [2024-12-16 10:59:38.191401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:27:38.219 [2024-12-16 10:59:38.191409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:27:38.219 [2024-12-16 10:59:38.191416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:38.219 [2024-12-16 10:59:38.191442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:38.219 [2024-12-16 10:59:38.191451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:27:38.219 [2024-12-16 10:59:38.191459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:27:38.219 [2024-12-16 10:59:38.191469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:38.219 [2024-12-16 10:59:38.191498] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:27:38.219 [2024-12-16 10:59:38.192698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:38.219 [2024-12-16 10:59:38.192889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:27:38.219 [2024-12-16 10:59:38.192907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.205 ms 00:27:38.219 [2024-12-16 10:59:38.192915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:38.219 [2024-12-16 10:59:38.192977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:38.219 [2024-12-16 10:59:38.192987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:27:38.219 [2024-12-16 10:59:38.192996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:27:38.219 [2024-12-16 10:59:38.193009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:38.219 [2024-12-16 10:59:38.193033] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:27:38.219 [2024-12-16 10:59:38.193055] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:27:38.219 [2024-12-16 10:59:38.193091] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:27:38.219 [2024-12-16 10:59:38.193106] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:27:38.219 [2024-12-16 10:59:38.193214] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:27:38.219 [2024-12-16 10:59:38.193225] upgrade/ftl_sb_v5.c: 
101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:27:38.219 [2024-12-16 10:59:38.193238] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:27:38.219 [2024-12-16 10:59:38.193249] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:27:38.219 [2024-12-16 10:59:38.193259] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:27:38.219 [2024-12-16 10:59:38.193267] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:27:38.219 [2024-12-16 10:59:38.193275] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:27:38.219 [2024-12-16 10:59:38.193282] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:27:38.219 [2024-12-16 10:59:38.193289] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:27:38.219 [2024-12-16 10:59:38.193303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:38.219 [2024-12-16 10:59:38.193310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:27:38.219 [2024-12-16 10:59:38.193317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.272 ms 00:27:38.219 [2024-12-16 10:59:38.193329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:38.219 [2024-12-16 10:59:38.193416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:38.219 [2024-12-16 10:59:38.193426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:27:38.219 [2024-12-16 10:59:38.193435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.069 ms 00:27:38.219 [2024-12-16 10:59:38.193445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:38.219 [2024-12-16 10:59:38.193550] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:27:38.219 [2024-12-16 10:59:38.193562] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:27:38.219 [2024-12-16 10:59:38.193571] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:38.219 [2024-12-16 10:59:38.193584] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:38.219 [2024-12-16 10:59:38.193593] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:27:38.219 [2024-12-16 10:59:38.193600] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:27:38.219 [2024-12-16 10:59:38.193609] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:27:38.219 [2024-12-16 10:59:38.193618] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:27:38.219 [2024-12-16 10:59:38.193627] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:27:38.219 [2024-12-16 10:59:38.193634] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:38.219 [2024-12-16 10:59:38.193643] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:27:38.219 [2024-12-16 10:59:38.193651] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:27:38.219 [2024-12-16 10:59:38.193659] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:38.219 [2024-12-16 10:59:38.193667] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:27:38.219 [2024-12-16 10:59:38.193675] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 
00:27:38.219 [2024-12-16 10:59:38.193687] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:38.219 [2024-12-16 10:59:38.193695] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:27:38.219 [2024-12-16 10:59:38.193703] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:27:38.219 [2024-12-16 10:59:38.193711] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:38.219 [2024-12-16 10:59:38.193719] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:27:38.219 [2024-12-16 10:59:38.193727] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:27:38.219 [2024-12-16 10:59:38.193734] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:38.219 [2024-12-16 10:59:38.193742] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:27:38.219 [2024-12-16 10:59:38.193750] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:27:38.219 [2024-12-16 10:59:38.193758] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:38.219 [2024-12-16 10:59:38.193766] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:27:38.219 [2024-12-16 10:59:38.193773] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:27:38.219 [2024-12-16 10:59:38.193781] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:38.219 [2024-12-16 10:59:38.193790] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:27:38.219 [2024-12-16 10:59:38.193802] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:27:38.219 [2024-12-16 10:59:38.193813] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:38.219 [2024-12-16 10:59:38.193824] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:27:38.219 [2024-12-16 10:59:38.193830] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:27:38.219 [2024-12-16 10:59:38.193837] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:38.219 [2024-12-16 10:59:38.193844] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:27:38.219 [2024-12-16 10:59:38.193851] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:27:38.219 [2024-12-16 10:59:38.193857] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:38.219 [2024-12-16 10:59:38.193863] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:27:38.219 [2024-12-16 10:59:38.193871] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:27:38.219 [2024-12-16 10:59:38.193877] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:38.219 [2024-12-16 10:59:38.193884] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:27:38.219 [2024-12-16 10:59:38.193891] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:27:38.219 [2024-12-16 10:59:38.193897] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:38.219 [2024-12-16 10:59:38.193903] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:27:38.219 [2024-12-16 10:59:38.193912] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:27:38.219 [2024-12-16 10:59:38.193923] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:38.219 [2024-12-16 10:59:38.193946] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 
0.12 MiB 00:27:38.219 [2024-12-16 10:59:38.193957] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:27:38.219 [2024-12-16 10:59:38.193964] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:27:38.219 [2024-12-16 10:59:38.193971] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:27:38.219 [2024-12-16 10:59:38.193977] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:27:38.219 [2024-12-16 10:59:38.193984] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:27:38.219 [2024-12-16 10:59:38.193992] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:27:38.219 [2024-12-16 10:59:38.194000] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:27:38.219 [2024-12-16 10:59:38.194014] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:38.219 [2024-12-16 10:59:38.194024] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:27:38.219 [2024-12-16 10:59:38.194031] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:27:38.219 [2024-12-16 10:59:38.194039] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:27:38.219 [2024-12-16 10:59:38.194046] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:27:38.219 [2024-12-16 10:59:38.194054] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:27:38.219 [2024-12-16 10:59:38.194062] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:27:38.219 [2024-12-16 10:59:38.194071] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:27:38.219 [2024-12-16 10:59:38.194082] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:27:38.219 [2024-12-16 10:59:38.194092] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:27:38.219 [2024-12-16 10:59:38.194099] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:27:38.219 [2024-12-16 10:59:38.194107] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:27:38.219 [2024-12-16 10:59:38.194114] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:27:38.219 [2024-12-16 10:59:38.194121] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:27:38.219 [2024-12-16 10:59:38.194129] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:27:38.220 [2024-12-16 10:59:38.194136] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata 
layout - base dev: 00:27:38.220 [2024-12-16 10:59:38.194144] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:38.220 [2024-12-16 10:59:38.194153] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:38.220 [2024-12-16 10:59:38.194161] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:27:38.220 [2024-12-16 10:59:38.194174] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:27:38.220 [2024-12-16 10:59:38.194181] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:27:38.220 [2024-12-16 10:59:38.194189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:38.220 [2024-12-16 10:59:38.194197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:27:38.220 [2024-12-16 10:59:38.194205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.708 ms 00:27:38.220 [2024-12-16 10:59:38.194212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:38.220 [2024-12-16 10:59:38.204222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:38.220 [2024-12-16 10:59:38.204255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:27:38.220 [2024-12-16 10:59:38.204265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.956 ms 00:27:38.220 [2024-12-16 10:59:38.204272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:38.220 [2024-12-16 10:59:38.204306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:38.220 [2024-12-16 10:59:38.204321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:27:38.220 [2024-12-16 10:59:38.204329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:27:38.220 [2024-12-16 10:59:38.204336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:38.482 [2024-12-16 10:59:38.221352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:38.482 [2024-12-16 10:59:38.221395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:27:38.482 [2024-12-16 10:59:38.221406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 16.963 ms 00:27:38.482 [2024-12-16 10:59:38.221414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:38.482 [2024-12-16 10:59:38.221449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:38.482 [2024-12-16 10:59:38.221458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:27:38.482 [2024-12-16 10:59:38.221466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:27:38.482 [2024-12-16 10:59:38.221474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:38.482 [2024-12-16 10:59:38.221576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:38.482 [2024-12-16 10:59:38.221590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:27:38.482 [2024-12-16 10:59:38.221600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.042 ms 00:27:38.482 [2024-12-16 10:59:38.221607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:27:38.482 [2024-12-16 10:59:38.221648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:38.482 [2024-12-16 10:59:38.221662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:27:38.482 [2024-12-16 10:59:38.221670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.024 ms 00:27:38.482 [2024-12-16 10:59:38.221677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:38.482 [2024-12-16 10:59:38.227896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:38.482 [2024-12-16 10:59:38.227956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:27:38.482 [2024-12-16 10:59:38.227970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.198 ms 00:27:38.482 [2024-12-16 10:59:38.227980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:38.482 [2024-12-16 10:59:38.228083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:38.482 [2024-12-16 10:59:38.228096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:27:38.482 [2024-12-16 10:59:38.228126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:27:38.482 [2024-12-16 10:59:38.228135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:38.482 [2024-12-16 10:59:38.232899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:38.482 [2024-12-16 10:59:38.232966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:27:38.482 [2024-12-16 10:59:38.232979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.739 ms 00:27:38.482 [2024-12-16 10:59:38.232989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:38.482 [2024-12-16 10:59:38.234482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:38.482 [2024-12-16 10:59:38.234527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:27:38.482 [2024-12-16 10:59:38.234540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.351 ms 00:27:38.482 [2024-12-16 10:59:38.234550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:38.482 [2024-12-16 10:59:38.250207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:38.482 [2024-12-16 10:59:38.250253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:27:38.482 [2024-12-16 10:59:38.250264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.606 ms 00:27:38.482 [2024-12-16 10:59:38.250274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:38.482 [2024-12-16 10:59:38.250389] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:27:38.482 [2024-12-16 10:59:38.250470] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:27:38.482 [2024-12-16 10:59:38.250547] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:27:38.482 [2024-12-16 10:59:38.250625] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:27:38.482 [2024-12-16 10:59:38.250633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:38.482 [2024-12-16 10:59:38.250641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:27:38.482 [2024-12-16 
10:59:38.250649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.318 ms 00:27:38.482 [2024-12-16 10:59:38.250656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:38.482 [2024-12-16 10:59:38.250690] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:27:38.482 [2024-12-16 10:59:38.250703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:38.482 [2024-12-16 10:59:38.250711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:27:38.482 [2024-12-16 10:59:38.250719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:27:38.482 [2024-12-16 10:59:38.250726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:38.482 [2024-12-16 10:59:38.253884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:38.482 [2024-12-16 10:59:38.253917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:27:38.482 [2024-12-16 10:59:38.253942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.138 ms 00:27:38.482 [2024-12-16 10:59:38.253950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:38.482 [2024-12-16 10:59:38.254575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:38.482 [2024-12-16 10:59:38.254690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:27:38.482 [2024-12-16 10:59:38.254705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:27:38.482 [2024-12-16 10:59:38.254712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:38.482 [2024-12-16 10:59:38.254793] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:27:38.482 [2024-12-16 10:59:38.254963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:38.482 [2024-12-16 10:59:38.254975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:27:38.482 [2024-12-16 10:59:38.254984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.171 ms 00:27:38.482 [2024-12-16 10:59:38.254996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:39.055 [2024-12-16 10:59:38.854589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:39.055 [2024-12-16 10:59:38.854681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:27:39.055 [2024-12-16 10:59:38.854700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 599.285 ms 00:27:39.055 [2024-12-16 10:59:38.854728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:39.055 [2024-12-16 10:59:38.856970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:39.055 [2024-12-16 10:59:38.857235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:27:39.055 [2024-12-16 10:59:38.857256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.705 ms 00:27:39.055 [2024-12-16 10:59:38.857265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:39.055 [2024-12-16 10:59:38.857681] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 262144, seq id 14 00:27:39.055 [2024-12-16 10:59:38.857722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:39.055 [2024-12-16 10:59:38.857733] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:27:39.055 [2024-12-16 10:59:38.857744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.407 ms 00:27:39.055 [2024-12-16 10:59:38.857752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:39.055 [2024-12-16 10:59:38.857787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:39.055 [2024-12-16 10:59:38.857806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:27:39.055 [2024-12-16 10:59:38.857815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:27:39.055 [2024-12-16 10:59:38.857824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:39.055 [2024-12-16 10:59:38.857884] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 603.085 ms, result 0 00:27:39.055 [2024-12-16 10:59:38.857960] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:27:39.055 [2024-12-16 10:59:38.858053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:39.055 [2024-12-16 10:59:38.858066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:27:39.055 [2024-12-16 10:59:38.858076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.095 ms 00:27:39.055 [2024-12-16 10:59:38.858084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:39.626 [2024-12-16 10:59:39.443144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:39.626 [2024-12-16 10:59:39.443210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:27:39.626 [2024-12-16 10:59:39.443224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 584.508 ms 00:27:39.626 [2024-12-16 10:59:39.443233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:39.626 [2024-12-16 10:59:39.444603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:39.626 [2024-12-16 10:59:39.444739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:27:39.626 [2024-12-16 10:59:39.444755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.052 ms 00:27:39.626 [2024-12-16 10:59:39.444762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:39.626 [2024-12-16 10:59:39.445136] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:27:39.626 [2024-12-16 10:59:39.445156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:39.626 [2024-12-16 10:59:39.445164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:27:39.626 [2024-12-16 10:59:39.445173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.364 ms 00:27:39.626 [2024-12-16 10:59:39.445180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:39.626 [2024-12-16 10:59:39.445210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:39.626 [2024-12-16 10:59:39.445218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:27:39.626 [2024-12-16 10:59:39.445226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:27:39.626 [2024-12-16 10:59:39.445233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:39.626 [2024-12-16 
10:59:39.445267] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 587.331 ms, result 0 00:27:39.626 [2024-12-16 10:59:39.445308] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:27:39.626 [2024-12-16 10:59:39.445326] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:27:39.626 [2024-12-16 10:59:39.445338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:39.626 [2024-12-16 10:59:39.445346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:27:39.626 [2024-12-16 10:59:39.445354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1190.559 ms 00:27:39.626 [2024-12-16 10:59:39.445361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:39.626 [2024-12-16 10:59:39.445390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:39.626 [2024-12-16 10:59:39.445399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:27:39.626 [2024-12-16 10:59:39.445410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:27:39.626 [2024-12-16 10:59:39.445421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:39.626 [2024-12-16 10:59:39.453185] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:27:39.626 [2024-12-16 10:59:39.453277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:39.626 [2024-12-16 10:59:39.453287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:27:39.626 [2024-12-16 10:59:39.453301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.841 ms 00:27:39.626 [2024-12-16 10:59:39.453309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:39.626 [2024-12-16 10:59:39.453990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:39.626 [2024-12-16 10:59:39.454006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:27:39.626 [2024-12-16 10:59:39.454016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.617 ms 00:27:39.626 [2024-12-16 10:59:39.454023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:39.626 [2024-12-16 10:59:39.456270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:39.626 [2024-12-16 10:59:39.456289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:27:39.626 [2024-12-16 10:59:39.456298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.231 ms 00:27:39.626 [2024-12-16 10:59:39.456306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:39.626 [2024-12-16 10:59:39.456345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:39.626 [2024-12-16 10:59:39.456353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:27:39.626 [2024-12-16 10:59:39.456360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:27:39.626 [2024-12-16 10:59:39.456367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:39.626 [2024-12-16 10:59:39.456469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:39.626 [2024-12-16 10:59:39.456485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:27:39.626 
[2024-12-16 10:59:39.456492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:27:39.626 [2024-12-16 10:59:39.456499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:39.626 [2024-12-16 10:59:39.456522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:39.626 [2024-12-16 10:59:39.456529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:27:39.626 [2024-12-16 10:59:39.456537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:27:39.626 [2024-12-16 10:59:39.456544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:39.626 [2024-12-16 10:59:39.456570] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:27:39.626 [2024-12-16 10:59:39.456578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:39.626 [2024-12-16 10:59:39.456589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:27:39.626 [2024-12-16 10:59:39.456597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:27:39.626 [2024-12-16 10:59:39.456604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:39.626 [2024-12-16 10:59:39.456655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:39.626 [2024-12-16 10:59:39.456665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:27:39.626 [2024-12-16 10:59:39.456673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.032 ms 00:27:39.626 [2024-12-16 10:59:39.456684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:39.626 [2024-12-16 10:59:39.457520] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1275.292 ms, result 0 00:27:39.626 [2024-12-16 10:59:39.469873] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:39.626 [2024-12-16 10:59:39.485856] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:27:39.626 [2024-12-16 10:59:39.493973] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:27:40.196 10:59:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:40.196 10:59:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:27:40.196 10:59:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:40.196 10:59:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:27:40.196 10:59:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:27:40.196 10:59:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:27:40.196 Validate MD5 checksum, iteration 1 00:27:40.196 10:59:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:27:40.196 10:59:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:27:40.196 10:59:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:27:40.196 10:59:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:40.196 10:59:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:40.196 10:59:39 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:40.196 10:59:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:40.196 10:59:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:27:40.196 10:59:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:40.196 [2024-12-16 10:59:40.053357] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:27:40.196 [2024-12-16 10:59:40.054020] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92420 ] 00:27:40.454 [2024-12-16 10:59:40.193743] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:40.454 [2024-12-16 10:59:40.225433] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:27:41.834  [2024-12-16T10:59:42.083Z] Copying: 634/1024 [MB] (634 MBps) [2024-12-16T10:59:42.721Z] Copying: 1024/1024 [MB] (average 683 MBps) 00:27:42.732 00:27:42.732 10:59:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:27:42.733 10:59:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:45.279 10:59:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:27:45.279 Validate MD5 checksum, iteration 2 00:27:45.279 10:59:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=74f941d41762230fe0fc55ff9eccf28a 00:27:45.279 10:59:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 74f941d41762230fe0fc55ff9eccf28a != \7\4\f\9\4\1\d\4\1\7\6\2\2\3\0\f\e\0\f\c\5\5\f\f\9\e\c\c\f\2\8\a ]] 00:27:45.279 10:59:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:27:45.279 10:59:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:27:45.279 10:59:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:27:45.279 10:59:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:45.279 10:59:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:45.279 10:59:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:45.279 10:59:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:45.279 10:59:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:27:45.279 10:59:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:45.279 [2024-12-16 10:59:44.801104] Starting SPDK v24.09.1-pre git 
sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:27:45.279 [2024-12-16 10:59:44.801193] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92478 ] 00:27:45.279 [2024-12-16 10:59:44.931341] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:45.279 [2024-12-16 10:59:44.960327] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:27:46.652  [2024-12-16T10:59:46.641Z] Copying: 719/1024 [MB] (719 MBps) [2024-12-16T10:59:47.209Z] Copying: 1024/1024 [MB] (average 748 MBps) 00:27:47.220 00:27:47.220 10:59:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:27:47.220 10:59:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:49.764 10:59:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:27:49.764 10:59:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=d322b80ca65eca0784b0918d435b8c3e 00:27:49.764 10:59:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ d322b80ca65eca0784b0918d435b8c3e != \d\3\2\2\b\8\0\c\a\6\5\e\c\a\0\7\8\4\b\0\9\1\8\d\4\3\5\b\8\c\3\e ]] 00:27:49.764 10:59:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:27:49.764 10:59:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:27:49.764 10:59:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:27:49.764 10:59:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:27:49.764 10:59:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:27:49.764 10:59:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:49.764 10:59:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:27:49.764 10:59:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:27:49.764 10:59:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:27:49.765 10:59:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:27:49.765 10:59:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 92390 ]] 00:27:49.765 10:59:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 92390 00:27:49.765 10:59:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@950 -- # '[' -z 92390 ']' 00:27:49.765 10:59:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # kill -0 92390 00:27:49.765 10:59:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # uname 00:27:49.765 10:59:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:49.765 10:59:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 92390 00:27:49.765 killing process with pid 92390 00:27:49.765 10:59:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:49.765 10:59:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:49.765 10:59:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@968 -- # echo 'killing process with pid 92390' 00:27:49.765 10:59:49 ftl.ftl_upgrade_shutdown -- 
common/autotest_common.sh@969 -- # kill 92390 00:27:49.765 10:59:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@974 -- # wait 92390 00:27:49.765 [2024-12-16 10:59:49.516293] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:27:49.765 [2024-12-16 10:59:49.522292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:49.765 [2024-12-16 10:59:49.522327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:27:49.765 [2024-12-16 10:59:49.522338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:27:49.765 [2024-12-16 10:59:49.522345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:49.765 [2024-12-16 10:59:49.522363] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:27:49.765 [2024-12-16 10:59:49.522881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:49.765 [2024-12-16 10:59:49.522899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:27:49.765 [2024-12-16 10:59:49.522907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.507 ms 00:27:49.765 [2024-12-16 10:59:49.522915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:49.765 [2024-12-16 10:59:49.523115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:49.765 [2024-12-16 10:59:49.523126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:27:49.765 [2024-12-16 10:59:49.523134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.167 ms 00:27:49.765 [2024-12-16 10:59:49.523145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:49.765 [2024-12-16 10:59:49.524668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:49.765 [2024-12-16 10:59:49.524695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:27:49.765 [2024-12-16 10:59:49.524703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.509 ms 00:27:49.765 [2024-12-16 10:59:49.524709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:49.765 [2024-12-16 10:59:49.525597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:49.765 [2024-12-16 10:59:49.525619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:27:49.765 [2024-12-16 10:59:49.525628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.852 ms 00:27:49.765 [2024-12-16 10:59:49.525635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:49.765 [2024-12-16 10:59:49.528300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:49.765 [2024-12-16 10:59:49.528446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:27:49.765 [2024-12-16 10:59:49.528459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.635 ms 00:27:49.765 [2024-12-16 10:59:49.528465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:49.765 [2024-12-16 10:59:49.529877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:49.765 [2024-12-16 10:59:49.529912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:27:49.765 [2024-12-16 10:59:49.529920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.385 ms 00:27:49.765 [2024-12-16 10:59:49.529938] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:27:49.765 [2024-12-16 10:59:49.530004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:49.765 [2024-12-16 10:59:49.530016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:27:49.765 [2024-12-16 10:59:49.530023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.035 ms 00:27:49.765 [2024-12-16 10:59:49.530030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:49.765 [2024-12-16 10:59:49.531998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:49.765 [2024-12-16 10:59:49.532023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:27:49.765 [2024-12-16 10:59:49.532030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.954 ms 00:27:49.765 [2024-12-16 10:59:49.532037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:49.765 [2024-12-16 10:59:49.533709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:49.765 [2024-12-16 10:59:49.533734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:27:49.765 [2024-12-16 10:59:49.533742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.646 ms 00:27:49.765 [2024-12-16 10:59:49.533748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:49.765 [2024-12-16 10:59:49.535094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:49.765 [2024-12-16 10:59:49.535191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:27:49.765 [2024-12-16 10:59:49.535203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.313 ms 00:27:49.765 [2024-12-16 10:59:49.535209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:49.765 [2024-12-16 10:59:49.536331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:49.765 [2024-12-16 10:59:49.536360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:27:49.765 [2024-12-16 10:59:49.536367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.073 ms 00:27:49.765 [2024-12-16 10:59:49.536372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:49.765 [2024-12-16 10:59:49.536399] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:27:49.765 [2024-12-16 10:59:49.536412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:27:49.765 [2024-12-16 10:59:49.536425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:27:49.765 [2024-12-16 10:59:49.536432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:27:49.765 [2024-12-16 10:59:49.536439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:49.765 [2024-12-16 10:59:49.536445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:49.765 [2024-12-16 10:59:49.536451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:49.765 [2024-12-16 10:59:49.536457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:49.765 [2024-12-16 10:59:49.536462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:49.765 
[2024-12-16 10:59:49.536469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:49.765 [2024-12-16 10:59:49.536476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:49.765 [2024-12-16 10:59:49.536482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:49.765 [2024-12-16 10:59:49.536488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:49.765 [2024-12-16 10:59:49.536493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:49.765 [2024-12-16 10:59:49.536499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:49.765 [2024-12-16 10:59:49.536505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:49.765 [2024-12-16 10:59:49.536512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:49.765 [2024-12-16 10:59:49.536517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:49.765 [2024-12-16 10:59:49.536523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:49.765 [2024-12-16 10:59:49.536530] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:27:49.765 [2024-12-16 10:59:49.536536] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 9711f87b-2f3a-4e62-9cc7-2dfc884de907 00:27:49.765 [2024-12-16 10:59:49.536543] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:27:49.765 [2024-12-16 10:59:49.536549] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:27:49.765 [2024-12-16 10:59:49.536558] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:27:49.765 [2024-12-16 10:59:49.536564] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:27:49.765 [2024-12-16 10:59:49.536569] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:27:49.765 [2024-12-16 10:59:49.536575] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:27:49.765 [2024-12-16 10:59:49.536582] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:27:49.765 [2024-12-16 10:59:49.536587] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:27:49.765 [2024-12-16 10:59:49.536592] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:27:49.765 [2024-12-16 10:59:49.536599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:49.765 [2024-12-16 10:59:49.536605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:27:49.765 [2024-12-16 10:59:49.536612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.201 ms 00:27:49.765 [2024-12-16 10:59:49.536621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:49.765 [2024-12-16 10:59:49.538316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:49.765 [2024-12-16 10:59:49.538415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:27:49.765 [2024-12-16 10:59:49.538426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.682 ms 00:27:49.765 [2024-12-16 10:59:49.538432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 
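
(Editor's note: every FTL management step in this trace is emitted by mngt/ftl_mngt.c as an Action/name/duration/status quadruple, as in the shutdown sequence above. A minimal sketch for tabulating step names against their durations from a saved copy of this console output — "ftl.log" is a hypothetical filename, and the pattern assumes one trace entry per line in the saved log:)

    awk '/428:trace_step/ { sub(/.*name: /, "");     name = $0 }
         /430:trace_step/ { sub(/.*duration: /, ""); print $0 "\t" name }' ftl.log

(Each printed pair corresponds to one traced step, e.g. "1.509 ms	Persist L2P" from the shutdown flow above.)
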
00:27:49.765 [2024-12-16 10:59:49.538523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:49.765 [2024-12-16 10:59:49.538529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:27:49.765 [2024-12-16 10:59:49.538541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.073 ms 00:27:49.765 [2024-12-16 10:59:49.538547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:49.765 [2024-12-16 10:59:49.544594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:49.766 [2024-12-16 10:59:49.544621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:27:49.766 [2024-12-16 10:59:49.544634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:49.766 [2024-12-16 10:59:49.544640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:49.766 [2024-12-16 10:59:49.544665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:49.766 [2024-12-16 10:59:49.544672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:27:49.766 [2024-12-16 10:59:49.544680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:49.766 [2024-12-16 10:59:49.544686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:49.766 [2024-12-16 10:59:49.544741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:49.766 [2024-12-16 10:59:49.544749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:27:49.766 [2024-12-16 10:59:49.544755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:49.766 [2024-12-16 10:59:49.544761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:49.766 [2024-12-16 10:59:49.544776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:49.766 [2024-12-16 10:59:49.544783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:27:49.766 [2024-12-16 10:59:49.544789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:49.766 [2024-12-16 10:59:49.544797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:49.766 [2024-12-16 10:59:49.555557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:49.766 [2024-12-16 10:59:49.555694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:27:49.766 [2024-12-16 10:59:49.555707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:49.766 [2024-12-16 10:59:49.555713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:49.766 [2024-12-16 10:59:49.564277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:49.766 [2024-12-16 10:59:49.564307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:27:49.766 [2024-12-16 10:59:49.564324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:49.766 [2024-12-16 10:59:49.564331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:49.766 [2024-12-16 10:59:49.564391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:49.766 [2024-12-16 10:59:49.564399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:27:49.766 [2024-12-16 10:59:49.564406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:49.766 [2024-12-16 10:59:49.564412] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:49.766 [2024-12-16 10:59:49.564440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:49.766 [2024-12-16 10:59:49.564448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:27:49.766 [2024-12-16 10:59:49.564458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:49.766 [2024-12-16 10:59:49.564464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:49.766 [2024-12-16 10:59:49.564524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:49.766 [2024-12-16 10:59:49.564535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:27:49.766 [2024-12-16 10:59:49.564541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:49.766 [2024-12-16 10:59:49.564547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:49.766 [2024-12-16 10:59:49.564578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:49.766 [2024-12-16 10:59:49.564585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:27:49.766 [2024-12-16 10:59:49.564592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:49.766 [2024-12-16 10:59:49.564597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:49.766 [2024-12-16 10:59:49.564633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:49.766 [2024-12-16 10:59:49.564641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:27:49.766 [2024-12-16 10:59:49.564648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:49.766 [2024-12-16 10:59:49.564654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:49.766 [2024-12-16 10:59:49.564695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:49.766 [2024-12-16 10:59:49.564702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:27:49.766 [2024-12-16 10:59:49.564708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:49.766 [2024-12-16 10:59:49.564732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:49.766 [2024-12-16 10:59:49.564848] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 42.529 ms, result 0 00:27:50.028 10:59:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:27:50.028 10:59:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:50.028 10:59:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:27:50.028 10:59:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:27:50.028 10:59:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]] 00:27:50.028 10:59:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:27:50.028 10:59:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:27:50.028 Remove shared memory files 00:27:50.028 10:59:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:27:50.028 10:59:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:27:50.028 10:59:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:27:50.028 10:59:49 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid92182 00:27:50.028 10:59:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:27:50.028 10:59:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:27:50.028 00:27:50.028 real 1m14.235s 00:27:50.028 user 1m39.241s 00:27:50.028 sys 0m19.345s 00:27:50.028 10:59:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:50.028 10:59:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:50.028 ************************************ 00:27:50.028 END TEST ftl_upgrade_shutdown 00:27:50.028 ************************************ 00:27:50.028 10:59:49 ftl -- ftl/ftl.sh@80 -- # [[ 1 -eq 1 ]] 00:27:50.028 10:59:49 ftl -- ftl/ftl.sh@81 -- # run_test ftl_restore_fast /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:27:50.028 10:59:49 ftl -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:27:50.028 10:59:49 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:50.028 10:59:49 ftl -- common/autotest_common.sh@10 -- # set +x 00:27:50.028 ************************************ 00:27:50.028 START TEST ftl_restore_fast 00:27:50.028 ************************************ 00:27:50.028 10:59:49 ftl.ftl_restore_fast -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:27:50.028 * Looking for test storage... 00:27:50.028 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:27:50.028 10:59:49 ftl.ftl_restore_fast -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:27:50.028 10:59:49 ftl.ftl_restore_fast -- common/autotest_common.sh@1681 -- # lcov --version 00:27:50.028 10:59:49 ftl.ftl_restore_fast -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:27:50.028 10:59:49 ftl.ftl_restore_fast -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:27:50.028 10:59:49 ftl.ftl_restore_fast -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:50.028 10:59:49 ftl.ftl_restore_fast -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:50.028 10:59:49 ftl.ftl_restore_fast -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:50.028 10:59:49 ftl.ftl_restore_fast -- scripts/common.sh@336 -- # IFS=.-: 00:27:50.028 10:59:49 ftl.ftl_restore_fast -- scripts/common.sh@336 -- # read -ra ver1 00:27:50.028 10:59:49 ftl.ftl_restore_fast -- scripts/common.sh@337 -- # IFS=.-: 00:27:50.028 10:59:49 ftl.ftl_restore_fast -- scripts/common.sh@337 -- # read -ra ver2 00:27:50.028 10:59:49 ftl.ftl_restore_fast -- scripts/common.sh@338 -- # local 'op=<' 00:27:50.028 10:59:49 ftl.ftl_restore_fast -- scripts/common.sh@340 -- # ver1_l=2 00:27:50.028 10:59:49 ftl.ftl_restore_fast -- scripts/common.sh@341 -- # ver2_l=1 00:27:50.028 10:59:49 ftl.ftl_restore_fast -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:50.028 10:59:49 ftl.ftl_restore_fast -- scripts/common.sh@344 -- # case "$op" in 00:27:50.028 10:59:49 ftl.ftl_restore_fast -- scripts/common.sh@345 -- # : 1 00:27:50.028 10:59:49 ftl.ftl_restore_fast -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:50.028 10:59:49 ftl.ftl_restore_fast -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:50.028 10:59:49 ftl.ftl_restore_fast -- scripts/common.sh@365 -- # decimal 1 00:27:50.028 10:59:49 ftl.ftl_restore_fast -- scripts/common.sh@353 -- # local d=1 00:27:50.028 10:59:49 ftl.ftl_restore_fast -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:50.028 10:59:49 ftl.ftl_restore_fast -- scripts/common.sh@355 -- # echo 1 00:27:50.028 10:59:49 ftl.ftl_restore_fast -- scripts/common.sh@365 -- # ver1[v]=1 00:27:50.028 10:59:49 ftl.ftl_restore_fast -- scripts/common.sh@366 -- # decimal 2 00:27:50.028 10:59:49 ftl.ftl_restore_fast -- scripts/common.sh@353 -- # local d=2 00:27:50.028 10:59:49 ftl.ftl_restore_fast -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:50.028 10:59:49 ftl.ftl_restore_fast -- scripts/common.sh@355 -- # echo 2 00:27:50.028 10:59:49 ftl.ftl_restore_fast -- scripts/common.sh@366 -- # ver2[v]=2 00:27:50.028 10:59:49 ftl.ftl_restore_fast -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:50.028 10:59:49 ftl.ftl_restore_fast -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:50.028 10:59:49 ftl.ftl_restore_fast -- scripts/common.sh@368 -- # return 0 00:27:50.028 10:59:49 ftl.ftl_restore_fast -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:50.028 10:59:49 ftl.ftl_restore_fast -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:27:50.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:50.028 --rc genhtml_branch_coverage=1 00:27:50.028 --rc genhtml_function_coverage=1 00:27:50.028 --rc genhtml_legend=1 00:27:50.028 --rc geninfo_all_blocks=1 00:27:50.028 --rc geninfo_unexecuted_blocks=1 00:27:50.028 00:27:50.028 ' 00:27:50.028 10:59:49 ftl.ftl_restore_fast -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:27:50.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:50.028 --rc genhtml_branch_coverage=1 00:27:50.028 --rc genhtml_function_coverage=1 00:27:50.028 --rc genhtml_legend=1 00:27:50.028 --rc geninfo_all_blocks=1 00:27:50.028 --rc geninfo_unexecuted_blocks=1 00:27:50.028 00:27:50.028 ' 00:27:50.028 10:59:49 ftl.ftl_restore_fast -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:27:50.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:50.028 --rc genhtml_branch_coverage=1 00:27:50.028 --rc genhtml_function_coverage=1 00:27:50.028 --rc genhtml_legend=1 00:27:50.028 --rc geninfo_all_blocks=1 00:27:50.028 --rc geninfo_unexecuted_blocks=1 00:27:50.028 00:27:50.028 ' 00:27:50.028 10:59:49 ftl.ftl_restore_fast -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:27:50.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:50.028 --rc genhtml_branch_coverage=1 00:27:50.028 --rc genhtml_function_coverage=1 00:27:50.028 --rc genhtml_legend=1 00:27:50.028 --rc geninfo_all_blocks=1 00:27:50.028 --rc geninfo_unexecuted_blocks=1 00:27:50.028 00:27:50.028 ' 00:27:50.028 10:59:49 ftl.ftl_restore_fast -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:27:50.028 10:59:49 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:27:50.028 10:59:49 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:27:50.028 10:59:50 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:27:50.028 10:59:50 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
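
(Editor's note: the xtrace above is scripts/common.sh comparing the installed lcov version against 1.15 component by component. A standalone sketch of the same idea — not the verbatim helper, which also splits on '-' and ':' via IFS=.-: as shown in the trace — splitting on dots and comparing numerically left to right:)

    ver_lt() {                              # succeeds when $1 sorts before $2
      local IFS=. i
      local -a a=($1) b=($2)
      for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
      done
      return 1                              # equal is not "less than"
    }
    ver_lt 1.15 2 && echo "lcov predates 2.x"
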
00:27:50.028 10:59:50 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:27:50.028 10:59:50 ftl.ftl_restore_fast -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:50.028 10:59:50 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:27:50.028 10:59:50 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:27:50.028 10:59:50 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:50.028 10:59:50 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:50.028 10:59:50 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:27:50.028 10:59:50 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:27:50.028 10:59:50 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:50.028 10:59:50 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:50.028 10:59:50 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:27:50.028 10:59:50 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:27:50.028 10:59:50 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:50.028 10:59:50 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:50.028 10:59:50 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:27:50.028 10:59:50 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:27:50.028 10:59:50 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:27:50.028 10:59:50 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:27:50.028 10:59:50 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:27:50.028 10:59:50 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:27:50.028 10:59:50 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:27:50.028 10:59:50 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # spdk_ini_pid= 00:27:50.028 10:59:50 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:50.028 10:59:50 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:50.028 10:59:50 ftl.ftl_restore_fast -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:50.028 10:59:50 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mktemp -d 00:27:50.289 10:59:50 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.YhFWiWtl1v 00:27:50.289 10:59:50 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:27:50.289 10:59:50 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:27:50.289 10:59:50 ftl.ftl_restore_fast -- ftl/restore.sh@19 -- # fast_shutdown=1 00:27:50.289 10:59:50 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:27:50.289 10:59:50 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:27:50.289 10:59:50 ftl.ftl_restore_fast -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:27:50.289 10:59:50 ftl.ftl_restore_fast 
-- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:27:50.289 10:59:50 ftl.ftl_restore_fast -- ftl/restore.sh@23 -- # shift 3 00:27:50.289 10:59:50 ftl.ftl_restore_fast -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:27:50.289 10:59:50 ftl.ftl_restore_fast -- ftl/restore.sh@25 -- # timeout=240 00:27:50.289 10:59:50 ftl.ftl_restore_fast -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:27:50.289 10:59:50 ftl.ftl_restore_fast -- ftl/restore.sh@39 -- # svcpid=92606 00:27:50.289 10:59:50 ftl.ftl_restore_fast -- ftl/restore.sh@41 -- # waitforlisten 92606 00:27:50.289 10:59:50 ftl.ftl_restore_fast -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:50.289 10:59:50 ftl.ftl_restore_fast -- common/autotest_common.sh@831 -- # '[' -z 92606 ']' 00:27:50.289 10:59:50 ftl.ftl_restore_fast -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:50.289 10:59:50 ftl.ftl_restore_fast -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:50.289 10:59:50 ftl.ftl_restore_fast -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:50.289 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:50.289 10:59:50 ftl.ftl_restore_fast -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:50.289 10:59:50 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:27:50.289 [2024-12-16 10:59:50.086036] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:27:50.289 [2024-12-16 10:59:50.086311] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92606 ] 00:27:50.289 [2024-12-16 10:59:50.219453] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:50.289 [2024-12-16 10:59:50.260345] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:27:51.230 10:59:50 ftl.ftl_restore_fast -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:51.230 10:59:50 ftl.ftl_restore_fast -- common/autotest_common.sh@864 -- # return 0 00:27:51.230 10:59:50 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:27:51.230 10:59:50 ftl.ftl_restore_fast -- ftl/common.sh@54 -- # local name=nvme0 00:27:51.230 10:59:50 ftl.ftl_restore_fast -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:27:51.230 10:59:50 ftl.ftl_restore_fast -- ftl/common.sh@56 -- # local size=103424 00:27:51.230 10:59:50 ftl.ftl_restore_fast -- ftl/common.sh@59 -- # local base_bdev 00:27:51.230 10:59:50 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:27:51.230 10:59:51 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:27:51.230 10:59:51 ftl.ftl_restore_fast -- ftl/common.sh@62 -- # local base_size 00:27:51.230 10:59:51 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:27:51.230 10:59:51 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:27:51.230 10:59:51 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # local bdev_info 00:27:51.230 10:59:51 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # local bs 00:27:51.230 10:59:51 ftl.ftl_restore_fast -- 
common/autotest_common.sh@1381 -- # local nb 00:27:51.230 10:59:51 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:27:51.490 10:59:51 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:27:51.490 { 00:27:51.490 "name": "nvme0n1", 00:27:51.490 "aliases": [ 00:27:51.490 "6cf366ff-5f3d-4f40-898c-331950d62a64" 00:27:51.490 ], 00:27:51.490 "product_name": "NVMe disk", 00:27:51.490 "block_size": 4096, 00:27:51.490 "num_blocks": 1310720, 00:27:51.490 "uuid": "6cf366ff-5f3d-4f40-898c-331950d62a64", 00:27:51.490 "numa_id": -1, 00:27:51.490 "assigned_rate_limits": { 00:27:51.490 "rw_ios_per_sec": 0, 00:27:51.490 "rw_mbytes_per_sec": 0, 00:27:51.490 "r_mbytes_per_sec": 0, 00:27:51.490 "w_mbytes_per_sec": 0 00:27:51.490 }, 00:27:51.490 "claimed": true, 00:27:51.490 "claim_type": "read_many_write_one", 00:27:51.490 "zoned": false, 00:27:51.490 "supported_io_types": { 00:27:51.490 "read": true, 00:27:51.490 "write": true, 00:27:51.490 "unmap": true, 00:27:51.490 "flush": true, 00:27:51.490 "reset": true, 00:27:51.490 "nvme_admin": true, 00:27:51.490 "nvme_io": true, 00:27:51.490 "nvme_io_md": false, 00:27:51.490 "write_zeroes": true, 00:27:51.490 "zcopy": false, 00:27:51.490 "get_zone_info": false, 00:27:51.490 "zone_management": false, 00:27:51.490 "zone_append": false, 00:27:51.490 "compare": true, 00:27:51.490 "compare_and_write": false, 00:27:51.490 "abort": true, 00:27:51.490 "seek_hole": false, 00:27:51.490 "seek_data": false, 00:27:51.490 "copy": true, 00:27:51.490 "nvme_iov_md": false 00:27:51.490 }, 00:27:51.490 "driver_specific": { 00:27:51.490 "nvme": [ 00:27:51.490 { 00:27:51.490 "pci_address": "0000:00:11.0", 00:27:51.490 "trid": { 00:27:51.490 "trtype": "PCIe", 00:27:51.490 "traddr": "0000:00:11.0" 00:27:51.490 }, 00:27:51.490 "ctrlr_data": { 00:27:51.490 "cntlid": 0, 00:27:51.490 "vendor_id": "0x1b36", 00:27:51.490 "model_number": "QEMU NVMe Ctrl", 00:27:51.490 "serial_number": "12341", 00:27:51.490 "firmware_revision": "8.0.0", 00:27:51.490 "subnqn": "nqn.2019-08.org.qemu:12341", 00:27:51.490 "oacs": { 00:27:51.490 "security": 0, 00:27:51.490 "format": 1, 00:27:51.490 "firmware": 0, 00:27:51.490 "ns_manage": 1 00:27:51.490 }, 00:27:51.490 "multi_ctrlr": false, 00:27:51.490 "ana_reporting": false 00:27:51.490 }, 00:27:51.490 "vs": { 00:27:51.490 "nvme_version": "1.4" 00:27:51.490 }, 00:27:51.490 "ns_data": { 00:27:51.490 "id": 1, 00:27:51.490 "can_share": false 00:27:51.490 } 00:27:51.490 } 00:27:51.490 ], 00:27:51.490 "mp_policy": "active_passive" 00:27:51.490 } 00:27:51.490 } 00:27:51.490 ]' 00:27:51.490 10:59:51 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:27:51.490 10:59:51 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # bs=4096 00:27:51.490 10:59:51 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:27:51.490 10:59:51 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # nb=1310720 00:27:51.490 10:59:51 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:27:51.490 10:59:51 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # echo 5120 00:27:51.490 10:59:51 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # base_size=5120 00:27:51.490 10:59:51 ftl.ftl_restore_fast -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:27:51.490 10:59:51 ftl.ftl_restore_fast -- ftl/common.sh@67 -- # clear_lvols 00:27:51.490 10:59:51 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:27:51.490 10:59:51 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:27:51.750 10:59:51 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # stores=4618fd72-ae85-4ed8-8969-40493c30bbf0 00:27:51.750 10:59:51 ftl.ftl_restore_fast -- ftl/common.sh@29 -- # for lvs in $stores 00:27:51.750 10:59:51 ftl.ftl_restore_fast -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 4618fd72-ae85-4ed8-8969-40493c30bbf0 00:27:52.010 10:59:51 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:27:52.010 10:59:51 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # lvs=60ed9392-52b5-4a8b-ab77-d108098e38ba 00:27:52.010 10:59:51 ftl.ftl_restore_fast -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 60ed9392-52b5-4a8b-ab77-d108098e38ba 00:27:52.271 10:59:52 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # split_bdev=5083aa1b-b439-454a-ba99-66abeac6578d 00:27:52.271 10:59:52 ftl.ftl_restore_fast -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:27:52.271 10:59:52 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 5083aa1b-b439-454a-ba99-66abeac6578d 00:27:52.271 10:59:52 ftl.ftl_restore_fast -- ftl/common.sh@35 -- # local name=nvc0 00:27:52.271 10:59:52 ftl.ftl_restore_fast -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:27:52.271 10:59:52 ftl.ftl_restore_fast -- ftl/common.sh@37 -- # local base_bdev=5083aa1b-b439-454a-ba99-66abeac6578d 00:27:52.271 10:59:52 ftl.ftl_restore_fast -- ftl/common.sh@38 -- # local cache_size= 00:27:52.271 10:59:52 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # get_bdev_size 5083aa1b-b439-454a-ba99-66abeac6578d 00:27:52.271 10:59:52 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # local bdev_name=5083aa1b-b439-454a-ba99-66abeac6578d 00:27:52.271 10:59:52 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # local bdev_info 00:27:52.271 10:59:52 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # local bs 00:27:52.271 10:59:52 ftl.ftl_restore_fast -- common/autotest_common.sh@1381 -- # local nb 00:27:52.271 10:59:52 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 5083aa1b-b439-454a-ba99-66abeac6578d 00:27:52.531 10:59:52 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:27:52.531 { 00:27:52.531 "name": "5083aa1b-b439-454a-ba99-66abeac6578d", 00:27:52.531 "aliases": [ 00:27:52.531 "lvs/nvme0n1p0" 00:27:52.531 ], 00:27:52.531 "product_name": "Logical Volume", 00:27:52.531 "block_size": 4096, 00:27:52.531 "num_blocks": 26476544, 00:27:52.531 "uuid": "5083aa1b-b439-454a-ba99-66abeac6578d", 00:27:52.531 "assigned_rate_limits": { 00:27:52.531 "rw_ios_per_sec": 0, 00:27:52.531 "rw_mbytes_per_sec": 0, 00:27:52.531 "r_mbytes_per_sec": 0, 00:27:52.531 "w_mbytes_per_sec": 0 00:27:52.531 }, 00:27:52.531 "claimed": false, 00:27:52.531 "zoned": false, 00:27:52.531 "supported_io_types": { 00:27:52.531 "read": true, 00:27:52.531 "write": true, 00:27:52.531 "unmap": true, 00:27:52.531 "flush": false, 00:27:52.531 "reset": true, 00:27:52.531 "nvme_admin": false, 00:27:52.531 "nvme_io": false, 00:27:52.531 "nvme_io_md": false, 00:27:52.531 "write_zeroes": true, 00:27:52.531 "zcopy": false, 00:27:52.531 "get_zone_info": false, 00:27:52.531 "zone_management": false, 00:27:52.531 
"zone_append": false, 00:27:52.531 "compare": false, 00:27:52.531 "compare_and_write": false, 00:27:52.531 "abort": false, 00:27:52.531 "seek_hole": true, 00:27:52.531 "seek_data": true, 00:27:52.531 "copy": false, 00:27:52.531 "nvme_iov_md": false 00:27:52.531 }, 00:27:52.531 "driver_specific": { 00:27:52.531 "lvol": { 00:27:52.531 "lvol_store_uuid": "60ed9392-52b5-4a8b-ab77-d108098e38ba", 00:27:52.531 "base_bdev": "nvme0n1", 00:27:52.531 "thin_provision": true, 00:27:52.531 "num_allocated_clusters": 0, 00:27:52.531 "snapshot": false, 00:27:52.531 "clone": false, 00:27:52.531 "esnap_clone": false 00:27:52.532 } 00:27:52.532 } 00:27:52.532 } 00:27:52.532 ]' 00:27:52.532 10:59:52 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:27:52.532 10:59:52 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # bs=4096 00:27:52.532 10:59:52 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:27:52.532 10:59:52 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # nb=26476544 00:27:52.532 10:59:52 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:27:52.532 10:59:52 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # echo 103424 00:27:52.532 10:59:52 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # local base_size=5171 00:27:52.532 10:59:52 ftl.ftl_restore_fast -- ftl/common.sh@44 -- # local nvc_bdev 00:27:52.532 10:59:52 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:27:52.791 10:59:52 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:27:52.791 10:59:52 ftl.ftl_restore_fast -- ftl/common.sh@47 -- # [[ -z '' ]] 00:27:52.791 10:59:52 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # get_bdev_size 5083aa1b-b439-454a-ba99-66abeac6578d 00:27:52.791 10:59:52 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # local bdev_name=5083aa1b-b439-454a-ba99-66abeac6578d 00:27:52.791 10:59:52 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # local bdev_info 00:27:52.792 10:59:52 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # local bs 00:27:52.792 10:59:52 ftl.ftl_restore_fast -- common/autotest_common.sh@1381 -- # local nb 00:27:52.792 10:59:52 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 5083aa1b-b439-454a-ba99-66abeac6578d 00:27:53.051 10:59:52 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:27:53.051 { 00:27:53.051 "name": "5083aa1b-b439-454a-ba99-66abeac6578d", 00:27:53.051 "aliases": [ 00:27:53.051 "lvs/nvme0n1p0" 00:27:53.051 ], 00:27:53.051 "product_name": "Logical Volume", 00:27:53.051 "block_size": 4096, 00:27:53.051 "num_blocks": 26476544, 00:27:53.051 "uuid": "5083aa1b-b439-454a-ba99-66abeac6578d", 00:27:53.051 "assigned_rate_limits": { 00:27:53.051 "rw_ios_per_sec": 0, 00:27:53.051 "rw_mbytes_per_sec": 0, 00:27:53.051 "r_mbytes_per_sec": 0, 00:27:53.051 "w_mbytes_per_sec": 0 00:27:53.051 }, 00:27:53.051 "claimed": false, 00:27:53.051 "zoned": false, 00:27:53.051 "supported_io_types": { 00:27:53.051 "read": true, 00:27:53.051 "write": true, 00:27:53.051 "unmap": true, 00:27:53.051 "flush": false, 00:27:53.051 "reset": true, 00:27:53.051 "nvme_admin": false, 00:27:53.051 "nvme_io": false, 00:27:53.051 "nvme_io_md": false, 00:27:53.051 "write_zeroes": true, 00:27:53.051 "zcopy": false, 00:27:53.051 "get_zone_info": false, 00:27:53.051 
"zone_management": false, 00:27:53.051 "zone_append": false, 00:27:53.051 "compare": false, 00:27:53.051 "compare_and_write": false, 00:27:53.051 "abort": false, 00:27:53.051 "seek_hole": true, 00:27:53.051 "seek_data": true, 00:27:53.051 "copy": false, 00:27:53.051 "nvme_iov_md": false 00:27:53.051 }, 00:27:53.051 "driver_specific": { 00:27:53.051 "lvol": { 00:27:53.051 "lvol_store_uuid": "60ed9392-52b5-4a8b-ab77-d108098e38ba", 00:27:53.051 "base_bdev": "nvme0n1", 00:27:53.051 "thin_provision": true, 00:27:53.051 "num_allocated_clusters": 0, 00:27:53.051 "snapshot": false, 00:27:53.051 "clone": false, 00:27:53.051 "esnap_clone": false 00:27:53.051 } 00:27:53.051 } 00:27:53.051 } 00:27:53.051 ]' 00:27:53.051 10:59:52 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:27:53.051 10:59:52 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # bs=4096 00:27:53.051 10:59:52 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:27:53.051 10:59:52 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # nb=26476544 00:27:53.051 10:59:52 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:27:53.051 10:59:52 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # echo 103424 00:27:53.051 10:59:52 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # cache_size=5171 00:27:53.051 10:59:52 ftl.ftl_restore_fast -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:27:53.311 10:59:53 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:27:53.311 10:59:53 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # get_bdev_size 5083aa1b-b439-454a-ba99-66abeac6578d 00:27:53.311 10:59:53 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # local bdev_name=5083aa1b-b439-454a-ba99-66abeac6578d 00:27:53.311 10:59:53 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # local bdev_info 00:27:53.311 10:59:53 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # local bs 00:27:53.311 10:59:53 ftl.ftl_restore_fast -- common/autotest_common.sh@1381 -- # local nb 00:27:53.311 10:59:53 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 5083aa1b-b439-454a-ba99-66abeac6578d 00:27:53.571 10:59:53 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:27:53.571 { 00:27:53.571 "name": "5083aa1b-b439-454a-ba99-66abeac6578d", 00:27:53.571 "aliases": [ 00:27:53.571 "lvs/nvme0n1p0" 00:27:53.571 ], 00:27:53.571 "product_name": "Logical Volume", 00:27:53.571 "block_size": 4096, 00:27:53.571 "num_blocks": 26476544, 00:27:53.571 "uuid": "5083aa1b-b439-454a-ba99-66abeac6578d", 00:27:53.571 "assigned_rate_limits": { 00:27:53.571 "rw_ios_per_sec": 0, 00:27:53.571 "rw_mbytes_per_sec": 0, 00:27:53.571 "r_mbytes_per_sec": 0, 00:27:53.571 "w_mbytes_per_sec": 0 00:27:53.571 }, 00:27:53.571 "claimed": false, 00:27:53.571 "zoned": false, 00:27:53.571 "supported_io_types": { 00:27:53.571 "read": true, 00:27:53.571 "write": true, 00:27:53.571 "unmap": true, 00:27:53.571 "flush": false, 00:27:53.571 "reset": true, 00:27:53.571 "nvme_admin": false, 00:27:53.571 "nvme_io": false, 00:27:53.571 "nvme_io_md": false, 00:27:53.571 "write_zeroes": true, 00:27:53.571 "zcopy": false, 00:27:53.571 "get_zone_info": false, 00:27:53.571 "zone_management": false, 00:27:53.571 "zone_append": false, 00:27:53.571 "compare": false, 00:27:53.571 "compare_and_write": false, 00:27:53.571 "abort": false, 
00:27:53.571 "seek_hole": true, 00:27:53.571 "seek_data": true, 00:27:53.571 "copy": false, 00:27:53.571 "nvme_iov_md": false 00:27:53.571 }, 00:27:53.571 "driver_specific": { 00:27:53.571 "lvol": { 00:27:53.571 "lvol_store_uuid": "60ed9392-52b5-4a8b-ab77-d108098e38ba", 00:27:53.571 "base_bdev": "nvme0n1", 00:27:53.571 "thin_provision": true, 00:27:53.571 "num_allocated_clusters": 0, 00:27:53.571 "snapshot": false, 00:27:53.571 "clone": false, 00:27:53.571 "esnap_clone": false 00:27:53.571 } 00:27:53.571 } 00:27:53.571 } 00:27:53.571 ]' 00:27:53.571 10:59:53 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:27:53.571 10:59:53 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # bs=4096 00:27:53.571 10:59:53 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:27:53.571 10:59:53 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # nb=26476544 00:27:53.571 10:59:53 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:27:53.571 10:59:53 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # echo 103424 00:27:53.571 10:59:53 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:27:53.571 10:59:53 ftl.ftl_restore_fast -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 5083aa1b-b439-454a-ba99-66abeac6578d --l2p_dram_limit 10' 00:27:53.571 10:59:53 ftl.ftl_restore_fast -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:27:53.571 10:59:53 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:27:53.571 10:59:53 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:27:53.571 10:59:53 ftl.ftl_restore_fast -- ftl/restore.sh@54 -- # '[' 1 -eq 1 ']' 00:27:53.571 10:59:53 ftl.ftl_restore_fast -- ftl/restore.sh@55 -- # ftl_construct_args+=' --fast-shutdown' 00:27:53.571 10:59:53 ftl.ftl_restore_fast -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 5083aa1b-b439-454a-ba99-66abeac6578d --l2p_dram_limit 10 -c nvc0n1p0 --fast-shutdown 00:27:53.832 [2024-12-16 10:59:53.637834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:53.832 [2024-12-16 10:59:53.638001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:27:53.832 [2024-12-16 10:59:53.638019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:53.832 [2024-12-16 10:59:53.638029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:53.832 [2024-12-16 10:59:53.638079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:53.832 [2024-12-16 10:59:53.638090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:53.832 [2024-12-16 10:59:53.638097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:27:53.832 [2024-12-16 10:59:53.638107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:53.832 [2024-12-16 10:59:53.638125] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:27:53.832 [2024-12-16 10:59:53.638316] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:27:53.832 [2024-12-16 10:59:53.638332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:53.832 [2024-12-16 10:59:53.638342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:53.832 [2024-12-16 10:59:53.638351] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.211 ms 00:27:53.832 [2024-12-16 10:59:53.638359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:53.832 [2024-12-16 10:59:53.638505] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID a72a5984-caaf-4053-9bd9-3891c43e1ef0 00:27:53.832 [2024-12-16 10:59:53.639786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:53.832 [2024-12-16 10:59:53.639811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:27:53.832 [2024-12-16 10:59:53.639821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:27:53.832 [2024-12-16 10:59:53.639828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:53.832 [2024-12-16 10:59:53.646870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:53.832 [2024-12-16 10:59:53.646897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:53.832 [2024-12-16 10:59:53.646907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.003 ms 00:27:53.833 [2024-12-16 10:59:53.646913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:53.833 [2024-12-16 10:59:53.646987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:53.833 [2024-12-16 10:59:53.646995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:53.833 [2024-12-16 10:59:53.647004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:27:53.833 [2024-12-16 10:59:53.647014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:53.833 [2024-12-16 10:59:53.647055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:53.833 [2024-12-16 10:59:53.647063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:27:53.833 [2024-12-16 10:59:53.647077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:27:53.833 [2024-12-16 10:59:53.647083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:53.833 [2024-12-16 10:59:53.647101] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:27:53.833 [2024-12-16 10:59:53.648746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:53.833 [2024-12-16 10:59:53.648773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:53.833 [2024-12-16 10:59:53.648783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.650 ms 00:27:53.833 [2024-12-16 10:59:53.648790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:53.833 [2024-12-16 10:59:53.648817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:53.833 [2024-12-16 10:59:53.648825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:27:53.833 [2024-12-16 10:59:53.648831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:27:53.833 [2024-12-16 10:59:53.648841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:53.833 [2024-12-16 10:59:53.648853] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:27:53.833 [2024-12-16 10:59:53.648981] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:27:53.833 [2024-12-16 10:59:53.648992] 
upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:27:53.833 [2024-12-16 10:59:53.649006] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:27:53.833 [2024-12-16 10:59:53.649016] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:27:53.833 [2024-12-16 10:59:53.649024] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:27:53.833 [2024-12-16 10:59:53.649031] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:27:53.833 [2024-12-16 10:59:53.649045] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:27:53.833 [2024-12-16 10:59:53.649053] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:27:53.833 [2024-12-16 10:59:53.649062] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:27:53.833 [2024-12-16 10:59:53.649070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:53.833 [2024-12-16 10:59:53.649077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:27:53.833 [2024-12-16 10:59:53.649083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.217 ms 00:27:53.833 [2024-12-16 10:59:53.649093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:53.833 [2024-12-16 10:59:53.649158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:53.833 [2024-12-16 10:59:53.649168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:27:53.833 [2024-12-16 10:59:53.649174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:27:53.833 [2024-12-16 10:59:53.649181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:53.833 [2024-12-16 10:59:53.649255] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:27:53.833 [2024-12-16 10:59:53.649266] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:27:53.833 [2024-12-16 10:59:53.649276] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:53.833 [2024-12-16 10:59:53.649284] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:53.833 [2024-12-16 10:59:53.649290] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:27:53.833 [2024-12-16 10:59:53.649297] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:27:53.833 [2024-12-16 10:59:53.649302] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:27:53.833 [2024-12-16 10:59:53.649309] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:27:53.833 [2024-12-16 10:59:53.649314] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:27:53.833 [2024-12-16 10:59:53.649321] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:53.833 [2024-12-16 10:59:53.649326] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:27:53.833 [2024-12-16 10:59:53.649333] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:27:53.833 [2024-12-16 10:59:53.649338] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:53.833 [2024-12-16 10:59:53.649346] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:27:53.833 [2024-12-16 10:59:53.649352] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:27:53.833 [2024-12-16 10:59:53.649359] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:53.833 [2024-12-16 10:59:53.649364] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:27:53.833 [2024-12-16 10:59:53.649371] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:27:53.833 [2024-12-16 10:59:53.649379] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:53.833 [2024-12-16 10:59:53.649386] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:27:53.833 [2024-12-16 10:59:53.649391] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:27:53.833 [2024-12-16 10:59:53.649399] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:53.833 [2024-12-16 10:59:53.649406] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:27:53.833 [2024-12-16 10:59:53.649414] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:27:53.833 [2024-12-16 10:59:53.649420] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:53.833 [2024-12-16 10:59:53.649427] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:27:53.833 [2024-12-16 10:59:53.649433] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:27:53.833 [2024-12-16 10:59:53.649440] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:53.833 [2024-12-16 10:59:53.649447] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:27:53.833 [2024-12-16 10:59:53.649457] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:27:53.833 [2024-12-16 10:59:53.649462] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:53.833 [2024-12-16 10:59:53.649470] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:27:53.833 [2024-12-16 10:59:53.649476] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:27:53.833 [2024-12-16 10:59:53.649484] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:53.833 [2024-12-16 10:59:53.649490] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:27:53.833 [2024-12-16 10:59:53.649497] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:27:53.833 [2024-12-16 10:59:53.649503] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:53.833 [2024-12-16 10:59:53.649510] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:27:53.833 [2024-12-16 10:59:53.649517] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:27:53.833 [2024-12-16 10:59:53.649524] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:53.833 [2024-12-16 10:59:53.649530] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:27:53.833 [2024-12-16 10:59:53.649537] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:27:53.833 [2024-12-16 10:59:53.649542] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:53.833 [2024-12-16 10:59:53.649549] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:27:53.833 [2024-12-16 10:59:53.649561] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:27:53.833 [2024-12-16 10:59:53.649571] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 
00:27:53.833 [2024-12-16 10:59:53.649577] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:53.833 [2024-12-16 10:59:53.649585] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:27:53.833 [2024-12-16 10:59:53.649591] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:27:53.833 [2024-12-16 10:59:53.649599] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:27:53.833 [2024-12-16 10:59:53.649606] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:27:53.833 [2024-12-16 10:59:53.649613] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:27:53.833 [2024-12-16 10:59:53.649620] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:27:53.833 [2024-12-16 10:59:53.649632] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:27:53.833 [2024-12-16 10:59:53.649641] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:53.833 [2024-12-16 10:59:53.649651] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:27:53.833 [2024-12-16 10:59:53.649658] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:27:53.833 [2024-12-16 10:59:53.649666] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:27:53.833 [2024-12-16 10:59:53.649673] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:27:53.833 [2024-12-16 10:59:53.649681] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:27:53.833 [2024-12-16 10:59:53.649688] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:27:53.833 [2024-12-16 10:59:53.649698] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:27:53.833 [2024-12-16 10:59:53.649704] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:27:53.833 [2024-12-16 10:59:53.649712] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:27:53.833 [2024-12-16 10:59:53.649718] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:27:53.833 [2024-12-16 10:59:53.649727] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:27:53.833 [2024-12-16 10:59:53.649734] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:27:53.834 [2024-12-16 10:59:53.649742] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:27:53.834 [2024-12-16 10:59:53.649749] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 
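The region sizes in the superblock metadata dump are consistent with the geometry summary logged earlier ("L2P entries: 20971520", "L2P address size: 4", "Region data_btm ... blocks: 102400.00 MiB"). A minimal sanity check, assuming the 4 KiB FTL block size these figures imply (the log never states the block size directly):

# Sanity-check the layout figures from the dump above.
# Assumption: 4 KiB FTL blocks -- not stated in the log, but it is the
# value that makes these numbers mutually consistent.
BLOCK = 4096

# Region type 0x2 (L2P) spans 0x5000 blocks in the nvc metadata layout.
l2p_region_bytes = 0x5000 * BLOCK
# "L2P entries: 20971520" x "L2P address size: 4" from the layout summary.
l2p_table_bytes = 20971520 * 4
assert l2p_region_bytes == l2p_table_bytes == 80 * 1024**2   # the 80.00 MiB l2p region

# Region type 0x9 in the base-dev list just below spans 0x1900000 blocks,
# which matches the data_btm region ("blocks: 102400.00 MiB").
assert 0x1900000 * BLOCK == 102400 * 1024**2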
00:27:53.834 [2024-12-16 10:59:53.649757] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:27:53.834 [2024-12-16 10:59:53.649766] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:53.834 [2024-12-16 10:59:53.649774] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:53.834 [2024-12-16 10:59:53.649781] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:27:53.834 [2024-12-16 10:59:53.649788] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:27:53.834 [2024-12-16 10:59:53.649794] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:27:53.834 [2024-12-16 10:59:53.649801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:53.834 [2024-12-16 10:59:53.649806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:27:53.834 [2024-12-16 10:59:53.649817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.595 ms 00:27:53.834 [2024-12-16 10:59:53.649823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:53.834 [2024-12-16 10:59:53.649853] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:27:53.834 [2024-12-16 10:59:53.649868] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:27:58.043 [2024-12-16 10:59:57.315126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.043 [2024-12-16 10:59:57.315196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:27:58.043 [2024-12-16 10:59:57.315215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3665.250 ms 00:27:58.043 [2024-12-16 10:59:57.315223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.043 [2024-12-16 10:59:57.324897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.043 [2024-12-16 10:59:57.325090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:58.043 [2024-12-16 10:59:57.325114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.583 ms 00:27:58.043 [2024-12-16 10:59:57.325128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.043 [2024-12-16 10:59:57.325223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.043 [2024-12-16 10:59:57.325232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:27:58.043 [2024-12-16 10:59:57.325245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:27:58.043 [2024-12-16 10:59:57.325253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.043 [2024-12-16 10:59:57.334523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.043 [2024-12-16 10:59:57.334569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:58.043 [2024-12-16 10:59:57.334581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.200 ms 00:27:58.043 [2024-12-16 10:59:57.334589] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.043 [2024-12-16 10:59:57.334621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.043 [2024-12-16 10:59:57.334628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:58.043 [2024-12-16 10:59:57.334641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:27:58.043 [2024-12-16 10:59:57.334648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.043 [2024-12-16 10:59:57.335067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.043 [2024-12-16 10:59:57.335088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:58.043 [2024-12-16 10:59:57.335100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.384 ms 00:27:58.043 [2024-12-16 10:59:57.335107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.043 [2024-12-16 10:59:57.335225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.043 [2024-12-16 10:59:57.335237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:58.043 [2024-12-16 10:59:57.335248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:27:58.043 [2024-12-16 10:59:57.335259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.043 [2024-12-16 10:59:57.350910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.043 [2024-12-16 10:59:57.350966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:58.043 [2024-12-16 10:59:57.350981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.625 ms 00:27:58.043 [2024-12-16 10:59:57.350990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.043 [2024-12-16 10:59:57.360605] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:27:58.043 [2024-12-16 10:59:57.363953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.043 [2024-12-16 10:59:57.363998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:27:58.043 [2024-12-16 10:59:57.364009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.860 ms 00:27:58.043 [2024-12-16 10:59:57.364020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.043 [2024-12-16 10:59:57.437543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.043 [2024-12-16 10:59:57.437609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:27:58.043 [2024-12-16 10:59:57.437623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 73.492 ms 00:27:58.043 [2024-12-16 10:59:57.437638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.043 [2024-12-16 10:59:57.437841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.043 [2024-12-16 10:59:57.437855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:27:58.043 [2024-12-16 10:59:57.437864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.152 ms 00:27:58.043 [2024-12-16 10:59:57.437875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.043 [2024-12-16 10:59:57.443632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.043 [2024-12-16 10:59:57.443815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save 
initial band info metadata 00:27:58.043 [2024-12-16 10:59:57.443836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.736 ms 00:27:58.043 [2024-12-16 10:59:57.443847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.043 [2024-12-16 10:59:57.448658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.043 [2024-12-16 10:59:57.448710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:27:58.043 [2024-12-16 10:59:57.448744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.746 ms 00:27:58.043 [2024-12-16 10:59:57.448754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.043 [2024-12-16 10:59:57.449165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.043 [2024-12-16 10:59:57.449185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:27:58.043 [2024-12-16 10:59:57.449195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.366 ms 00:27:58.043 [2024-12-16 10:59:57.449208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.043 [2024-12-16 10:59:57.486750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.044 [2024-12-16 10:59:57.486944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:27:58.044 [2024-12-16 10:59:57.486965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.499 ms 00:27:58.044 [2024-12-16 10:59:57.486981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.044 [2024-12-16 10:59:57.493518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.044 [2024-12-16 10:59:57.493676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:27:58.044 [2024-12-16 10:59:57.493695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.423 ms 00:27:58.044 [2024-12-16 10:59:57.493706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.044 [2024-12-16 10:59:57.499215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.044 [2024-12-16 10:59:57.499268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:27:58.044 [2024-12-16 10:59:57.499278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.469 ms 00:27:58.044 [2024-12-16 10:59:57.499288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.044 [2024-12-16 10:59:57.505652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.044 [2024-12-16 10:59:57.505711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:27:58.044 [2024-12-16 10:59:57.505722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.317 ms 00:27:58.044 [2024-12-16 10:59:57.505737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.044 [2024-12-16 10:59:57.505791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.044 [2024-12-16 10:59:57.505804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:27:58.044 [2024-12-16 10:59:57.505813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:27:58.044 [2024-12-16 10:59:57.505824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.044 [2024-12-16 10:59:57.505896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.044 [2024-12-16 10:59:57.505908] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:27:58.044 [2024-12-16 10:59:57.505916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:27:58.044 [2024-12-16 10:59:57.505953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.044 [2024-12-16 10:59:57.507113] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3868.747 ms, result 0 00:27:58.044 { 00:27:58.044 "name": "ftl0", 00:27:58.044 "uuid": "a72a5984-caaf-4053-9bd9-3891c43e1ef0" 00:27:58.044 } 00:27:58.044 10:59:57 ftl.ftl_restore_fast -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:27:58.044 10:59:57 ftl.ftl_restore_fast -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:27:58.044 10:59:57 ftl.ftl_restore_fast -- ftl/restore.sh@63 -- # echo ']}' 00:27:58.044 10:59:57 ftl.ftl_restore_fast -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:27:58.044 [2024-12-16 10:59:57.956913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.044 [2024-12-16 10:59:57.956991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:27:58.044 [2024-12-16 10:59:57.957010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:27:58.044 [2024-12-16 10:59:57.957019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.044 [2024-12-16 10:59:57.957048] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:58.044 [2024-12-16 10:59:57.957866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.044 [2024-12-16 10:59:57.957913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:58.044 [2024-12-16 10:59:57.957924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.799 ms 00:27:58.044 [2024-12-16 10:59:57.957952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.044 [2024-12-16 10:59:57.958241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.044 [2024-12-16 10:59:57.958264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:58.044 [2024-12-16 10:59:57.958273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.262 ms 00:27:58.044 [2024-12-16 10:59:57.958291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.044 [2024-12-16 10:59:57.961563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.044 [2024-12-16 10:59:57.961593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:27:58.044 [2024-12-16 10:59:57.961604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.255 ms 00:27:58.044 [2024-12-16 10:59:57.961615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.044 [2024-12-16 10:59:57.967850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.044 [2024-12-16 10:59:57.967899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:27:58.044 [2024-12-16 10:59:57.967911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.216 ms 00:27:58.044 [2024-12-16 10:59:57.967921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.044 [2024-12-16 10:59:57.970901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
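Every management step in the traces above is logged as an Action / name / duration / status quadruple, and the finish_msg record gives the total: the 'FTL startup' process took 3868.747 ms, of which the "Scrub NV cache" step alone accounted for 3665.250 ms. A minimal sketch of pulling the (name, duration) pairs out of a log of this shape; the two sample records are copied verbatim from this transcript:

import re

# Extract (name, duration) pairs from trace_step records like the ones above.
log = """\
[2024-12-16 10:59:57.315196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache
[2024-12-16 10:59:57.315215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3665.250 ms
"""

names = re.findall(r"trace_step: \*NOTICE\*: \[FTL\]\[\w+\] name: (.+)", log)
durs = re.findall(r"trace_step: \*NOTICE\*: \[FTL\]\[\w+\] duration: ([\d.]+) ms", log)
for name, ms in zip(names, durs):
    print(f"{name}: {ms} ms")   # Scrub NV cache: 3665.250 ms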
00:27:58.044 [2024-12-16 10:59:57.970976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:27:58.044 [2024-12-16 10:59:57.970988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.847 ms 00:27:58.044 [2024-12-16 10:59:57.971002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.044 [2024-12-16 10:59:57.976696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.044 [2024-12-16 10:59:57.976784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:27:58.044 [2024-12-16 10:59:57.976797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.641 ms 00:27:58.044 [2024-12-16 10:59:57.976814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.044 [2024-12-16 10:59:57.976984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.044 [2024-12-16 10:59:57.977005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:27:58.044 [2024-12-16 10:59:57.977016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.117 ms 00:27:58.044 [2024-12-16 10:59:57.977027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.044 [2024-12-16 10:59:57.980429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.044 [2024-12-16 10:59:57.980489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:27:58.044 [2024-12-16 10:59:57.980500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.378 ms 00:27:58.044 [2024-12-16 10:59:57.980510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.044 [2024-12-16 10:59:57.983247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.044 [2024-12-16 10:59:57.983310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:27:58.044 [2024-12-16 10:59:57.983319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.687 ms 00:27:58.044 [2024-12-16 10:59:57.983329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.044 [2024-12-16 10:59:57.985638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.044 [2024-12-16 10:59:57.985698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:27:58.044 [2024-12-16 10:59:57.985708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.261 ms 00:27:58.044 [2024-12-16 10:59:57.985719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.044 [2024-12-16 10:59:57.987917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.044 [2024-12-16 10:59:57.987997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:27:58.044 [2024-12-16 10:59:57.988015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.121 ms 00:27:58.044 [2024-12-16 10:59:57.988025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.044 [2024-12-16 10:59:57.988073] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:27:58.044 [2024-12-16 10:59:57.988093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:27:58.044 [2024-12-16 10:59:57.988104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:27:58.044 [2024-12-16 10:59:57.988114] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:27:58.044 [2024-12-16 10:59:57.988122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:58.044 [2024-12-16 10:59:57.988136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:58.044 [2024-12-16 10:59:57.988144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:58.044 [2024-12-16 10:59:57.988155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:58.044 [2024-12-16 10:59:57.988163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:58.044 [2024-12-16 10:59:57.988173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:58.044 [2024-12-16 10:59:57.988180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:58.044 [2024-12-16 10:59:57.988190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:58.044 [2024-12-16 10:59:57.988197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:58.044 [2024-12-16 10:59:57.988207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:58.044 [2024-12-16 10:59:57.988214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:58.044 [2024-12-16 10:59:57.988224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:58.044 [2024-12-16 10:59:57.988231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:58.044 [2024-12-16 10:59:57.988241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:58.044 [2024-12-16 10:59:57.988249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:58.044 [2024-12-16 10:59:57.988259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:27:58.044 [2024-12-16 10:59:57.988266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:27:58.044 [2024-12-16 10:59:57.988279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:27:58.044 [2024-12-16 10:59:57.988287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:27:58.044 [2024-12-16 10:59:57.988296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:27:58.044 [2024-12-16 10:59:57.988304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:27:58.045 [2024-12-16 10:59:57.988313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:27:58.045 [2024-12-16 10:59:57.988321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:27:58.045 [2024-12-16 10:59:57.988332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:27:58.045 [2024-12-16 10:59:57.988340] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:27:58.045 [2024-12-16 10:59:57.988349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:27:58.045 [2024-12-16 10:59:57.988357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:27:58.045 [2024-12-16 10:59:57.988367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:27:58.045 [2024-12-16 10:59:57.988376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:27:58.045 [2024-12-16 10:59:57.988386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:27:58.045 [2024-12-16 10:59:57.988394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:27:58.045 [2024-12-16 10:59:57.988404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:27:58.045 [2024-12-16 10:59:57.988412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:27:58.045 [2024-12-16 10:59:57.988424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:27:58.045 [2024-12-16 10:59:57.988433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:27:58.045 [2024-12-16 10:59:57.988442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:27:58.045 [2024-12-16 10:59:57.988449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:27:58.045 [2024-12-16 10:59:57.988459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:27:58.045 [2024-12-16 10:59:57.988467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:27:58.045 [2024-12-16 10:59:57.988479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:27:58.045 [2024-12-16 10:59:57.988487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:27:58.045 [2024-12-16 10:59:57.988497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:27:58.045 [2024-12-16 10:59:57.988504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:27:58.045 [2024-12-16 10:59:57.988513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:27:58.045 [2024-12-16 10:59:57.988520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:27:58.045 [2024-12-16 10:59:57.988531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:27:58.045 [2024-12-16 10:59:57.988538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:27:58.045 [2024-12-16 10:59:57.988549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:27:58.045 [2024-12-16 10:59:57.988556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:27:58.045 [2024-12-16 
10:59:57.988568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:27:58.045 [2024-12-16 10:59:57.988575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:27:58.045 [2024-12-16 10:59:57.988585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:27:58.045 [2024-12-16 10:59:57.988594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:27:58.045 [2024-12-16 10:59:57.988603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:27:58.045 [2024-12-16 10:59:57.988610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:27:58.045 [2024-12-16 10:59:57.988620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:27:58.045 [2024-12-16 10:59:57.988627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:27:58.045 [2024-12-16 10:59:57.988636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:27:58.045 [2024-12-16 10:59:57.988643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:27:58.045 [2024-12-16 10:59:57.988654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:27:58.045 [2024-12-16 10:59:57.988662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:27:58.045 [2024-12-16 10:59:57.988672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:27:58.045 [2024-12-16 10:59:57.988679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:27:58.045 [2024-12-16 10:59:57.988689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:27:58.045 [2024-12-16 10:59:57.988697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:27:58.045 [2024-12-16 10:59:57.988709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:27:58.045 [2024-12-16 10:59:57.988733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:27:58.045 [2024-12-16 10:59:57.988743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:27:58.045 [2024-12-16 10:59:57.988750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:27:58.045 [2024-12-16 10:59:57.988760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:27:58.045 [2024-12-16 10:59:57.988768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:27:58.045 [2024-12-16 10:59:57.988779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:27:58.045 [2024-12-16 10:59:57.988787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:27:58.045 [2024-12-16 10:59:57.988797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 
00:27:58.045 [2024-12-16 10:59:57.988804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:27:58.045 [2024-12-16 10:59:57.988814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:27:58.045 [2024-12-16 10:59:57.988821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:27:58.045 [2024-12-16 10:59:57.988831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:27:58.045 [2024-12-16 10:59:57.988839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:27:58.045 [2024-12-16 10:59:57.988849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:27:58.045 [2024-12-16 10:59:57.988857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:27:58.045 [2024-12-16 10:59:57.988870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:27:58.045 [2024-12-16 10:59:57.988878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:27:58.045 [2024-12-16 10:59:57.988888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:27:58.045 [2024-12-16 10:59:57.988895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:27:58.045 [2024-12-16 10:59:57.988904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:27:58.045 [2024-12-16 10:59:57.988912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:27:58.045 [2024-12-16 10:59:57.988922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:27:58.045 [2024-12-16 10:59:57.988958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:27:58.045 [2024-12-16 10:59:57.988974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:27:58.045 [2024-12-16 10:59:57.988985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:27:58.045 [2024-12-16 10:59:57.989001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:27:58.045 [2024-12-16 10:59:57.989019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:27:58.045 [2024-12-16 10:59:57.989029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:27:58.045 [2024-12-16 10:59:57.989037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:27:58.045 [2024-12-16 10:59:57.989048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:27:58.045 [2024-12-16 10:59:57.989056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:27:58.045 [2024-12-16 10:59:57.989078] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:27:58.045 [2024-12-16 10:59:57.989086] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: a72a5984-caaf-4053-9bd9-3891c43e1ef0 00:27:58.045 
[2024-12-16 10:59:57.989097] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:27:58.045 [2024-12-16 10:59:57.989105] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:27:58.045 [2024-12-16 10:59:57.989115] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:27:58.045 [2024-12-16 10:59:57.989123] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:27:58.045 [2024-12-16 10:59:57.989133] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:27:58.045 [2024-12-16 10:59:57.989141] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:27:58.045 [2024-12-16 10:59:57.989163] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:27:58.045 [2024-12-16 10:59:57.989171] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:27:58.045 [2024-12-16 10:59:57.989180] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:27:58.045 [2024-12-16 10:59:57.989187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.045 [2024-12-16 10:59:57.989196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:27:58.045 [2024-12-16 10:59:57.989208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.116 ms 00:27:58.045 [2024-12-16 10:59:57.989219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.045 [2024-12-16 10:59:57.991729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.045 [2024-12-16 10:59:57.991769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:27:58.045 [2024-12-16 10:59:57.991780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.469 ms 00:27:58.045 [2024-12-16 10:59:57.991790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.046 [2024-12-16 10:59:57.991961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.046 [2024-12-16 10:59:57.991975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:58.046 [2024-12-16 10:59:57.991984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.127 ms 00:27:58.046 [2024-12-16 10:59:57.991994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.046 [2024-12-16 10:59:58.000605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:58.046 [2024-12-16 10:59:58.000821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:58.046 [2024-12-16 10:59:58.000903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:58.046 [2024-12-16 10:59:58.000997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.046 [2024-12-16 10:59:58.001094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:58.046 [2024-12-16 10:59:58.001123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:58.046 [2024-12-16 10:59:58.001222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:58.046 [2024-12-16 10:59:58.001250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.046 [2024-12-16 10:59:58.001356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:58.046 [2024-12-16 10:59:58.001402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:58.046 [2024-12-16 10:59:58.001424] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:58.046 [2024-12-16 10:59:58.001495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.046 [2024-12-16 10:59:58.001529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:58.046 [2024-12-16 10:59:58.001613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:58.046 [2024-12-16 10:59:58.001669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:58.046 [2024-12-16 10:59:58.001700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.046 [2024-12-16 10:59:58.015458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:58.046 [2024-12-16 10:59:58.015648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:58.046 [2024-12-16 10:59:58.015809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:58.046 [2024-12-16 10:59:58.015834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.046 [2024-12-16 10:59:58.026986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:58.046 [2024-12-16 10:59:58.027170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:58.046 [2024-12-16 10:59:58.027228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:58.046 [2024-12-16 10:59:58.027259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.046 [2024-12-16 10:59:58.027360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:58.046 [2024-12-16 10:59:58.027392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:58.046 [2024-12-16 10:59:58.027413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:58.046 [2024-12-16 10:59:58.027435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.046 [2024-12-16 10:59:58.027523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:58.046 [2024-12-16 10:59:58.027668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:58.046 [2024-12-16 10:59:58.027689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:58.046 [2024-12-16 10:59:58.027714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.046 [2024-12-16 10:59:58.027815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:58.046 [2024-12-16 10:59:58.027850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:58.046 [2024-12-16 10:59:58.027872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:58.046 [2024-12-16 10:59:58.028004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.046 [2024-12-16 10:59:58.028087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:58.046 [2024-12-16 10:59:58.028157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:27:58.046 [2024-12-16 10:59:58.028184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:58.046 [2024-12-16 10:59:58.028321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.046 [2024-12-16 10:59:58.028383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:58.046 [2024-12-16 10:59:58.028411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open 
cache bdev 00:27:58.046 [2024-12-16 10:59:58.028476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:58.046 [2024-12-16 10:59:58.028502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.046 [2024-12-16 10:59:58.028566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:58.046 [2024-12-16 10:59:58.028626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:58.046 [2024-12-16 10:59:58.028651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:58.046 [2024-12-16 10:59:58.028676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.307 [2024-12-16 10:59:58.028884] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 71.930 ms, result 0 00:27:58.307 true 00:27:58.307 10:59:58 ftl.ftl_restore_fast -- ftl/restore.sh@66 -- # killprocess 92606 00:27:58.307 10:59:58 ftl.ftl_restore_fast -- common/autotest_common.sh@950 -- # '[' -z 92606 ']' 00:27:58.307 10:59:58 ftl.ftl_restore_fast -- common/autotest_common.sh@954 -- # kill -0 92606 00:27:58.307 10:59:58 ftl.ftl_restore_fast -- common/autotest_common.sh@955 -- # uname 00:27:58.307 10:59:58 ftl.ftl_restore_fast -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:58.307 10:59:58 ftl.ftl_restore_fast -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 92606 00:27:58.307 killing process with pid 92606 00:27:58.307 10:59:58 ftl.ftl_restore_fast -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:58.307 10:59:58 ftl.ftl_restore_fast -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:58.307 10:59:58 ftl.ftl_restore_fast -- common/autotest_common.sh@968 -- # echo 'killing process with pid 92606' 00:27:58.307 10:59:58 ftl.ftl_restore_fast -- common/autotest_common.sh@969 -- # kill 92606 00:27:58.307 10:59:58 ftl.ftl_restore_fast -- common/autotest_common.sh@974 -- # wait 92606 00:28:03.597 11:00:02 ftl.ftl_restore_fast -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:28:06.888 262144+0 records in 00:28:06.888 262144+0 records out 00:28:06.888 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 3.59175 s, 299 MB/s 00:28:06.888 11:00:06 ftl.ftl_restore_fast -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:28:08.799 11:00:08 ftl.ftl_restore_fast -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:28:08.799 [2024-12-16 11:00:08.615384] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
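Several figures above cross-check numerically: the dd invocation (bs=4K count=256K), the per-band block counts in the shutdown dump, and the "WAF: inf" statistic. A quick check, again assuming 4 KiB FTL blocks for the band arithmetic:

# dd: 256K records of 4 KiB each.
nbytes = 262144 * 4096
assert nbytes == 1073741824                  # "1073741824 bytes (1.1 GB, 1.0 GiB)"
print(round(nbytes / 3.59175 / 1e6))         # ~299 MB/s, as dd reports

# Bands: each dumps as "0 / 261120" blocks. At 4 KiB per block one band
# is 1020 MiB, so the 102400 MiB data region holds 100 full bands --
# matching Band 1..100 in the shutdown dump.
band_mib = 261120 * 4096 / 1024**2
assert band_mib == 1020.0
print(int(102400 // band_mib))               # 100

# WAF = total writes / user writes = 960 / 0; the stats dump prints
# "WAF: inf" because nothing was written through the user path yet.
total_writes, user_writes = 960, 0
waf = float("inf") if user_writes == 0 else total_writes / user_writes
print(waf)                                   # inf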
00:28:08.799 [2024-12-16 11:00:08.615475] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92812 ] 00:28:08.799 [2024-12-16 11:00:08.746871] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:08.799 [2024-12-16 11:00:08.778634] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:28:09.060 [2024-12-16 11:00:08.867081] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:09.060 [2024-12-16 11:00:08.867143] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:09.060 [2024-12-16 11:00:09.023545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:09.060 [2024-12-16 11:00:09.023587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:28:09.060 [2024-12-16 11:00:09.023603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:28:09.060 [2024-12-16 11:00:09.023610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:09.060 [2024-12-16 11:00:09.023656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:09.061 [2024-12-16 11:00:09.023669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:09.061 [2024-12-16 11:00:09.023677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:28:09.061 [2024-12-16 11:00:09.023688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:09.061 [2024-12-16 11:00:09.023704] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:28:09.061 [2024-12-16 11:00:09.023956] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:28:09.061 [2024-12-16 11:00:09.023972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:09.061 [2024-12-16 11:00:09.023979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:09.061 [2024-12-16 11:00:09.023991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.271 ms 00:28:09.061 [2024-12-16 11:00:09.023998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:09.061 [2024-12-16 11:00:09.025098] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:28:09.061 [2024-12-16 11:00:09.027726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:09.061 [2024-12-16 11:00:09.027855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:28:09.061 [2024-12-16 11:00:09.027872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.630 ms 00:28:09.061 [2024-12-16 11:00:09.027879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:09.061 [2024-12-16 11:00:09.027947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:09.061 [2024-12-16 11:00:09.027960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:28:09.061 [2024-12-16 11:00:09.027972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:28:09.061 [2024-12-16 11:00:09.027982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:09.061 [2024-12-16 11:00:09.032978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
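Unlike the first bring-up, which logged "Create new FTL, UUID ..." and default-initialized the superblock, this second start goes through "Load super block" against the metadata persisted at shutdown. A hedged sketch of telling the two paths apart when scanning logs of this shape; the marker strings are copied from this transcript:

import re

# Classify an FTL bring-up as fresh-create vs restore from its log text.
def classify(log_text: str) -> str:
    if re.search(r"Create new FTL, UUID [0-9a-f-]+", log_text):
        return "fresh create"    # superblock default-initialized
    if "name: Load super block" in log_text:
        return "restore"         # existing superblock loaded
    return "unknown"

print(classify("... name: Load super block ..."))   # restore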
00:28:09.061 [2024-12-16 11:00:09.033006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:09.061 [2024-12-16 11:00:09.033015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.933 ms 00:28:09.061 [2024-12-16 11:00:09.033026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:09.061 [2024-12-16 11:00:09.033106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:09.061 [2024-12-16 11:00:09.033114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:09.061 [2024-12-16 11:00:09.033125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:28:09.061 [2024-12-16 11:00:09.033132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:09.061 [2024-12-16 11:00:09.033167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:09.061 [2024-12-16 11:00:09.033178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:28:09.061 [2024-12-16 11:00:09.033186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:28:09.061 [2024-12-16 11:00:09.033193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:09.061 [2024-12-16 11:00:09.033213] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:28:09.061 [2024-12-16 11:00:09.034553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:09.061 [2024-12-16 11:00:09.034578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:09.061 [2024-12-16 11:00:09.034587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.345 ms 00:28:09.061 [2024-12-16 11:00:09.034598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:09.061 [2024-12-16 11:00:09.034625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:09.061 [2024-12-16 11:00:09.034633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:28:09.061 [2024-12-16 11:00:09.034640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:28:09.061 [2024-12-16 11:00:09.034647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:09.061 [2024-12-16 11:00:09.034665] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:28:09.061 [2024-12-16 11:00:09.034685] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:28:09.061 [2024-12-16 11:00:09.034724] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:28:09.061 [2024-12-16 11:00:09.034742] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:28:09.061 [2024-12-16 11:00:09.034842] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:28:09.061 [2024-12-16 11:00:09.034852] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:28:09.061 [2024-12-16 11:00:09.034861] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:28:09.061 [2024-12-16 11:00:09.034871] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:28:09.061 [2024-12-16 11:00:09.034882] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:28:09.061 [2024-12-16 11:00:09.034889] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:28:09.061 [2024-12-16 11:00:09.034896] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:28:09.061 [2024-12-16 11:00:09.034903] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:28:09.061 [2024-12-16 11:00:09.034910] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:28:09.061 [2024-12-16 11:00:09.034918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:09.061 [2024-12-16 11:00:09.034925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:28:09.061 [2024-12-16 11:00:09.034949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.254 ms 00:28:09.061 [2024-12-16 11:00:09.034955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:09.061 [2024-12-16 11:00:09.035037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:09.061 [2024-12-16 11:00:09.035048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:28:09.061 [2024-12-16 11:00:09.035056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:28:09.061 [2024-12-16 11:00:09.035063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:09.061 [2024-12-16 11:00:09.035158] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:28:09.061 [2024-12-16 11:00:09.035167] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:28:09.061 [2024-12-16 11:00:09.035175] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:09.061 [2024-12-16 11:00:09.035183] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:09.061 [2024-12-16 11:00:09.035190] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:28:09.061 [2024-12-16 11:00:09.035196] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:28:09.061 [2024-12-16 11:00:09.035204] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:28:09.061 [2024-12-16 11:00:09.035212] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:28:09.061 [2024-12-16 11:00:09.035220] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:28:09.061 [2024-12-16 11:00:09.035230] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:09.061 [2024-12-16 11:00:09.035237] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:28:09.061 [2024-12-16 11:00:09.035245] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:28:09.061 [2024-12-16 11:00:09.035253] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:09.061 [2024-12-16 11:00:09.035262] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:28:09.061 [2024-12-16 11:00:09.035270] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:28:09.061 [2024-12-16 11:00:09.035278] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:09.061 [2024-12-16 11:00:09.035285] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:28:09.061 [2024-12-16 11:00:09.035293] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:28:09.061 [2024-12-16 11:00:09.035300] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:09.061 [2024-12-16 11:00:09.035308] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:28:09.061 [2024-12-16 11:00:09.035316] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:28:09.061 [2024-12-16 11:00:09.035323] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:09.061 [2024-12-16 11:00:09.035331] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:28:09.061 [2024-12-16 11:00:09.035338] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:28:09.061 [2024-12-16 11:00:09.035345] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:09.061 [2024-12-16 11:00:09.035353] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:28:09.061 [2024-12-16 11:00:09.035361] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:28:09.061 [2024-12-16 11:00:09.035368] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:09.061 [2024-12-16 11:00:09.035375] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:28:09.061 [2024-12-16 11:00:09.035386] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:28:09.061 [2024-12-16 11:00:09.035394] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:09.061 [2024-12-16 11:00:09.035401] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:28:09.061 [2024-12-16 11:00:09.035408] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:28:09.061 [2024-12-16 11:00:09.035416] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:09.061 [2024-12-16 11:00:09.035424] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:28:09.061 [2024-12-16 11:00:09.035431] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:28:09.062 [2024-12-16 11:00:09.035439] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:09.062 [2024-12-16 11:00:09.035447] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:28:09.062 [2024-12-16 11:00:09.035455] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:28:09.062 [2024-12-16 11:00:09.035462] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:09.062 [2024-12-16 11:00:09.035470] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:28:09.062 [2024-12-16 11:00:09.035477] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:28:09.062 [2024-12-16 11:00:09.035485] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:09.062 [2024-12-16 11:00:09.035492] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:28:09.062 [2024-12-16 11:00:09.035501] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:28:09.062 [2024-12-16 11:00:09.035511] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:09.062 [2024-12-16 11:00:09.035521] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:09.062 [2024-12-16 11:00:09.035535] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:28:09.062 [2024-12-16 11:00:09.035543] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:28:09.062 [2024-12-16 11:00:09.035552] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:28:09.062 
[2024-12-16 11:00:09.035560] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:28:09.062 [2024-12-16 11:00:09.035567] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:28:09.062 [2024-12-16 11:00:09.035575] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:28:09.062 [2024-12-16 11:00:09.035584] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:28:09.062 [2024-12-16 11:00:09.035594] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:09.062 [2024-12-16 11:00:09.035604] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:28:09.062 [2024-12-16 11:00:09.035612] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:28:09.062 [2024-12-16 11:00:09.035620] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:28:09.062 [2024-12-16 11:00:09.035628] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:28:09.062 [2024-12-16 11:00:09.035635] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:28:09.062 [2024-12-16 11:00:09.035642] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:28:09.062 [2024-12-16 11:00:09.035650] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:28:09.062 [2024-12-16 11:00:09.035657] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:28:09.062 [2024-12-16 11:00:09.035664] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:28:09.062 [2024-12-16 11:00:09.035675] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:28:09.062 [2024-12-16 11:00:09.035682] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:28:09.062 [2024-12-16 11:00:09.035689] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:28:09.062 [2024-12-16 11:00:09.035696] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:28:09.062 [2024-12-16 11:00:09.035705] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:28:09.062 [2024-12-16 11:00:09.035712] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:28:09.062 [2024-12-16 11:00:09.035723] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:09.062 [2024-12-16 11:00:09.035731] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:28:09.062 [2024-12-16 11:00:09.035738] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:28:09.062 [2024-12-16 11:00:09.035746] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:28:09.062 [2024-12-16 11:00:09.035753] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:28:09.062 [2024-12-16 11:00:09.035760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:09.062 [2024-12-16 11:00:09.035767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:28:09.062 [2024-12-16 11:00:09.035777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.671 ms 00:28:09.062 [2024-12-16 11:00:09.035784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:09.324 [2024-12-16 11:00:09.056031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:09.324 [2024-12-16 11:00:09.056111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:09.324 [2024-12-16 11:00:09.056136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.205 ms 00:28:09.324 [2024-12-16 11:00:09.056156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:09.324 [2024-12-16 11:00:09.056331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:09.324 [2024-12-16 11:00:09.056349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:28:09.324 [2024-12-16 11:00:09.056365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.124 ms 00:28:09.324 [2024-12-16 11:00:09.056380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:09.324 [2024-12-16 11:00:09.067111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:09.324 [2024-12-16 11:00:09.067238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:09.324 [2024-12-16 11:00:09.067253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.626 ms 00:28:09.324 [2024-12-16 11:00:09.067261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:09.324 [2024-12-16 11:00:09.067290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:09.324 [2024-12-16 11:00:09.067299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:09.324 [2024-12-16 11:00:09.067307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:28:09.324 [2024-12-16 11:00:09.067314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:09.324 [2024-12-16 11:00:09.067663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:09.324 [2024-12-16 11:00:09.067681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:09.324 [2024-12-16 11:00:09.067689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.297 ms 00:28:09.324 [2024-12-16 11:00:09.067696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:09.324 [2024-12-16 11:00:09.067816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:09.324 [2024-12-16 11:00:09.067826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:09.324 [2024-12-16 11:00:09.067835] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.104 ms 00:28:09.324 [2024-12-16 11:00:09.067843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:09.324 [2024-12-16 11:00:09.072418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:09.324 [2024-12-16 11:00:09.072448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:09.324 [2024-12-16 11:00:09.072461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.554 ms 00:28:09.324 [2024-12-16 11:00:09.072468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:09.324 [2024-12-16 11:00:09.075088] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:28:09.324 [2024-12-16 11:00:09.075128] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:28:09.324 [2024-12-16 11:00:09.075142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:09.324 [2024-12-16 11:00:09.075152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:28:09.324 [2024-12-16 11:00:09.075161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.597 ms 00:28:09.324 [2024-12-16 11:00:09.075167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:09.324 [2024-12-16 11:00:09.089573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:09.324 [2024-12-16 11:00:09.089603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:28:09.324 [2024-12-16 11:00:09.089619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.369 ms 00:28:09.324 [2024-12-16 11:00:09.089631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:09.324 [2024-12-16 11:00:09.091436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:09.324 [2024-12-16 11:00:09.091466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:28:09.324 [2024-12-16 11:00:09.091474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.768 ms 00:28:09.324 [2024-12-16 11:00:09.091481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:09.324 [2024-12-16 11:00:09.093179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:09.324 [2024-12-16 11:00:09.093291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:28:09.324 [2024-12-16 11:00:09.093305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.665 ms 00:28:09.324 [2024-12-16 11:00:09.093312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:09.324 [2024-12-16 11:00:09.093612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:09.324 [2024-12-16 11:00:09.093627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:28:09.324 [2024-12-16 11:00:09.093635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.246 ms 00:28:09.324 [2024-12-16 11:00:09.093642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:09.324 [2024-12-16 11:00:09.109561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:09.324 [2024-12-16 11:00:09.109605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:28:09.324 [2024-12-16 11:00:09.109622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
15.903 ms 00:28:09.324 [2024-12-16 11:00:09.109632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:09.324 [2024-12-16 11:00:09.117014] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:28:09.324 [2024-12-16 11:00:09.119240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:09.324 [2024-12-16 11:00:09.119273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:28:09.324 [2024-12-16 11:00:09.119284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.569 ms 00:28:09.324 [2024-12-16 11:00:09.119293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:09.324 [2024-12-16 11:00:09.119344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:09.324 [2024-12-16 11:00:09.119355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:28:09.324 [2024-12-16 11:00:09.119367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:28:09.324 [2024-12-16 11:00:09.119375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:09.324 [2024-12-16 11:00:09.119462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:09.324 [2024-12-16 11:00:09.119474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:28:09.324 [2024-12-16 11:00:09.119483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:28:09.324 [2024-12-16 11:00:09.119491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:09.324 [2024-12-16 11:00:09.119520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:09.324 [2024-12-16 11:00:09.119536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:28:09.324 [2024-12-16 11:00:09.119544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:28:09.324 [2024-12-16 11:00:09.119551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:09.324 [2024-12-16 11:00:09.119579] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:28:09.324 [2024-12-16 11:00:09.119589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:09.324 [2024-12-16 11:00:09.119596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:28:09.324 [2024-12-16 11:00:09.119604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:28:09.324 [2024-12-16 11:00:09.119615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:09.324 [2024-12-16 11:00:09.123395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:09.324 [2024-12-16 11:00:09.123435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:28:09.324 [2024-12-16 11:00:09.123445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.760 ms 00:28:09.324 [2024-12-16 11:00:09.123452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:09.324 [2024-12-16 11:00:09.123516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:09.324 [2024-12-16 11:00:09.123525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:28:09.324 [2024-12-16 11:00:09.123532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:28:09.324 [2024-12-16 11:00:09.123543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:09.324 
[2024-12-16 11:00:09.124554] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 100.610 ms, result 0 00:28:10.263  [2024-12-16T11:00:11.196Z] Copying: 24/1024 [MB] (24 MBps) [2024-12-16T11:00:12.139Z] Copying: 49/1024 [MB] (24 MBps) [2024-12-16T11:00:13.526Z] Copying: 67/1024 [MB] (18 MBps) [2024-12-16T11:00:14.471Z] Copying: 90/1024 [MB] (23 MBps) [2024-12-16T11:00:15.413Z] Copying: 111/1024 [MB] (21 MBps) [2024-12-16T11:00:16.354Z] Copying: 136/1024 [MB] (24 MBps) [2024-12-16T11:00:17.357Z] Copying: 158/1024 [MB] (21 MBps) [2024-12-16T11:00:18.302Z] Copying: 178/1024 [MB] (19 MBps) [2024-12-16T11:00:19.248Z] Copying: 199/1024 [MB] (21 MBps) [2024-12-16T11:00:20.192Z] Copying: 222/1024 [MB] (22 MBps) [2024-12-16T11:00:21.577Z] Copying: 246/1024 [MB] (23 MBps) [2024-12-16T11:00:22.148Z] Copying: 270/1024 [MB] (24 MBps) [2024-12-16T11:00:23.529Z] Copying: 291/1024 [MB] (21 MBps) [2024-12-16T11:00:24.467Z] Copying: 319/1024 [MB] (27 MBps) [2024-12-16T11:00:25.408Z] Copying: 356/1024 [MB] (36 MBps) [2024-12-16T11:00:26.352Z] Copying: 377/1024 [MB] (21 MBps) [2024-12-16T11:00:27.295Z] Copying: 391/1024 [MB] (13 MBps) [2024-12-16T11:00:28.239Z] Copying: 410/1024 [MB] (19 MBps) [2024-12-16T11:00:29.185Z] Copying: 426/1024 [MB] (15 MBps) [2024-12-16T11:00:30.571Z] Copying: 442/1024 [MB] (16 MBps) [2024-12-16T11:00:31.144Z] Copying: 459/1024 [MB] (17 MBps) [2024-12-16T11:00:32.531Z] Copying: 475/1024 [MB] (15 MBps) [2024-12-16T11:00:33.475Z] Copying: 493/1024 [MB] (17 MBps) [2024-12-16T11:00:34.419Z] Copying: 504/1024 [MB] (11 MBps) [2024-12-16T11:00:35.361Z] Copying: 520/1024 [MB] (15 MBps) [2024-12-16T11:00:36.304Z] Copying: 531/1024 [MB] (10 MBps) [2024-12-16T11:00:37.250Z] Copying: 546/1024 [MB] (14 MBps) [2024-12-16T11:00:38.194Z] Copying: 560/1024 [MB] (13 MBps) [2024-12-16T11:00:39.577Z] Copying: 578/1024 [MB] (18 MBps) [2024-12-16T11:00:40.148Z] Copying: 598/1024 [MB] (19 MBps) [2024-12-16T11:00:41.535Z] Copying: 617/1024 [MB] (19 MBps) [2024-12-16T11:00:42.479Z] Copying: 637/1024 [MB] (20 MBps) [2024-12-16T11:00:43.424Z] Copying: 658/1024 [MB] (20 MBps) [2024-12-16T11:00:44.368Z] Copying: 675/1024 [MB] (16 MBps) [2024-12-16T11:00:45.314Z] Copying: 694/1024 [MB] (18 MBps) [2024-12-16T11:00:46.320Z] Copying: 714/1024 [MB] (20 MBps) [2024-12-16T11:00:47.263Z] Copying: 732/1024 [MB] (17 MBps) [2024-12-16T11:00:48.204Z] Copying: 748/1024 [MB] (16 MBps) [2024-12-16T11:00:49.146Z] Copying: 772/1024 [MB] (23 MBps) [2024-12-16T11:00:50.528Z] Copying: 790/1024 [MB] (17 MBps) [2024-12-16T11:00:51.469Z] Copying: 812/1024 [MB] (22 MBps) [2024-12-16T11:00:52.414Z] Copying: 834/1024 [MB] (21 MBps) [2024-12-16T11:00:53.357Z] Copying: 845/1024 [MB] (11 MBps) [2024-12-16T11:00:54.300Z] Copying: 857/1024 [MB] (11 MBps) [2024-12-16T11:00:55.247Z] Copying: 872/1024 [MB] (15 MBps) [2024-12-16T11:00:56.196Z] Copying: 903624/1048576 [kB] (9976 kBps) [2024-12-16T11:00:57.140Z] Copying: 913600/1048576 [kB] (9976 kBps) [2024-12-16T11:00:58.528Z] Copying: 923552/1048576 [kB] (9952 kBps) [2024-12-16T11:00:59.470Z] Copying: 911/1024 [MB] (10 MBps) [2024-12-16T11:01:00.417Z] Copying: 922/1024 [MB] (10 MBps) [2024-12-16T11:01:01.361Z] Copying: 932/1024 [MB] (10 MBps) [2024-12-16T11:01:02.305Z] Copying: 942/1024 [MB] (10 MBps) [2024-12-16T11:01:03.246Z] Copying: 954/1024 [MB] (11 MBps) [2024-12-16T11:01:04.191Z] Copying: 975/1024 [MB] (21 MBps) [2024-12-16T11:01:05.580Z] Copying: 1009264/1048576 [kB] (10024 kBps) [2024-12-16T11:01:06.152Z] Copying: 997/1024 
[MB] (12 MBps) [2024-12-16T11:01:07.542Z] Copying: 1031884/1048576 [kB] (10060 kBps) [2024-12-16T11:01:07.542Z] Copying: 1023/1024 [MB] (15 MBps) [2024-12-16T11:01:07.542Z] Copying: 1024/1024 [MB] (average 17 MBps)[2024-12-16 11:01:07.172240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:07.553 [2024-12-16 11:01:07.172303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:29:07.553 [2024-12-16 11:01:07.172327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:29:07.553 [2024-12-16 11:01:07.172336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:07.553 [2024-12-16 11:01:07.172357] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:29:07.553 [2024-12-16 11:01:07.173197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:07.553 [2024-12-16 11:01:07.173229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:29:07.553 [2024-12-16 11:01:07.173241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.821 ms 00:29:07.553 [2024-12-16 11:01:07.173250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:07.553 [2024-12-16 11:01:07.176232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:07.553 [2024-12-16 11:01:07.176290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:29:07.553 [2024-12-16 11:01:07.176301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.953 ms 00:29:07.553 [2024-12-16 11:01:07.176309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:07.553 [2024-12-16 11:01:07.176342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:07.553 [2024-12-16 11:01:07.176359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:29:07.553 [2024-12-16 11:01:07.176368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:29:07.553 [2024-12-16 11:01:07.176375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:07.553 [2024-12-16 11:01:07.176430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:07.553 [2024-12-16 11:01:07.176440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:29:07.553 [2024-12-16 11:01:07.176448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:29:07.553 [2024-12-16 11:01:07.176456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:07.553 [2024-12-16 11:01:07.176475] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:29:07.553 [2024-12-16 11:01:07.176496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:29:07.553 [2024-12-16 11:01:07.176506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:29:07.553 [2024-12-16 11:01:07.176514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:29:07.553 [2024-12-16 11:01:07.176521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:07.553 [2024-12-16 11:01:07.176529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:07.553 [2024-12-16 11:01:07.176536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 
/ 261120 wr_cnt: 0 state: free 00:29:07.553 [2024-12-16 11:01:07.176543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:07.553 [2024-12-16 11:01:07.176550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:07.553 [2024-12-16 11:01:07.176557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:07.553 [2024-12-16 11:01:07.176564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:07.553 [2024-12-16 11:01:07.176572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:07.553 [2024-12-16 11:01:07.176579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:07.553 [2024-12-16 11:01:07.176586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:07.553 [2024-12-16 11:01:07.176592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:07.553 [2024-12-16 11:01:07.176600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:07.553 [2024-12-16 11:01:07.176606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:07.553 [2024-12-16 11:01:07.176613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:07.553 [2024-12-16 11:01:07.176622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:07.553 [2024-12-16 11:01:07.176630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:29:07.553 [2024-12-16 11:01:07.176637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:29:07.553 [2024-12-16 11:01:07.176646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:29:07.553 [2024-12-16 11:01:07.176654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:29:07.553 [2024-12-16 11:01:07.176664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:29:07.553 [2024-12-16 11:01:07.176672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:29:07.553 [2024-12-16 11:01:07.176680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:29:07.553 [2024-12-16 11:01:07.176687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:29:07.553 [2024-12-16 11:01:07.176695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:29:07.553 [2024-12-16 11:01:07.176702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:29:07.553 [2024-12-16 11:01:07.176709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:29:07.553 [2024-12-16 11:01:07.176734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:29:07.554 [2024-12-16 11:01:07.176741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:29:07.554 [2024-12-16 11:01:07.176749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:29:07.554 [2024-12-16 11:01:07.176757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:29:07.554 [2024-12-16 11:01:07.176764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:29:07.554 [2024-12-16 11:01:07.176772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:29:07.554 [2024-12-16 11:01:07.176779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:29:07.554 [2024-12-16 11:01:07.176786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:29:07.554 [2024-12-16 11:01:07.176793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:29:07.554 [2024-12-16 11:01:07.176801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:29:07.554 [2024-12-16 11:01:07.176808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:29:07.554 [2024-12-16 11:01:07.176816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:29:07.554 [2024-12-16 11:01:07.176823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:29:07.554 [2024-12-16 11:01:07.176830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:29:07.554 [2024-12-16 11:01:07.176837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:29:07.554 [2024-12-16 11:01:07.176844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:29:07.554 [2024-12-16 11:01:07.176853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:29:07.554 [2024-12-16 11:01:07.176860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:29:07.554 [2024-12-16 11:01:07.176867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:29:07.554 [2024-12-16 11:01:07.176874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:29:07.554 [2024-12-16 11:01:07.176882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:29:07.554 [2024-12-16 11:01:07.176890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:29:07.554 [2024-12-16 11:01:07.176897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:29:07.554 [2024-12-16 11:01:07.176904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:29:07.554 [2024-12-16 11:01:07.176912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:29:07.554 [2024-12-16 11:01:07.176921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:29:07.554 [2024-12-16 11:01:07.176951] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:29:07.554 [2024-12-16 11:01:07.176959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:29:07.554 [2024-12-16 11:01:07.176967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:29:07.554 [2024-12-16 11:01:07.176975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:29:07.554 [2024-12-16 11:01:07.176982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:29:07.554 [2024-12-16 11:01:07.176990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:29:07.554 [2024-12-16 11:01:07.176998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:29:07.554 [2024-12-16 11:01:07.177005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:29:07.554 [2024-12-16 11:01:07.177013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:29:07.554 [2024-12-16 11:01:07.177020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:29:07.554 [2024-12-16 11:01:07.177028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:29:07.554 [2024-12-16 11:01:07.177035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:29:07.554 [2024-12-16 11:01:07.177042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:29:07.554 [2024-12-16 11:01:07.177050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:29:07.554 [2024-12-16 11:01:07.177057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:29:07.554 [2024-12-16 11:01:07.177065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:29:07.554 [2024-12-16 11:01:07.177072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:29:07.554 [2024-12-16 11:01:07.177080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:29:07.554 [2024-12-16 11:01:07.177087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:29:07.554 [2024-12-16 11:01:07.177095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:29:07.554 [2024-12-16 11:01:07.177102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:29:07.554 [2024-12-16 11:01:07.177109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:29:07.554 [2024-12-16 11:01:07.177125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:29:07.554 [2024-12-16 11:01:07.177133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:29:07.554 [2024-12-16 11:01:07.177141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:29:07.554 [2024-12-16 
11:01:07.177148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:29:07.554 [2024-12-16 11:01:07.177158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:29:07.554 [2024-12-16 11:01:07.177165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:29:07.554 [2024-12-16 11:01:07.177173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:29:07.554 [2024-12-16 11:01:07.177180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:29:07.554 [2024-12-16 11:01:07.177187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:29:07.554 [2024-12-16 11:01:07.177195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:29:07.554 [2024-12-16 11:01:07.177203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:29:07.554 [2024-12-16 11:01:07.177210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:29:07.554 [2024-12-16 11:01:07.177218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:29:07.554 [2024-12-16 11:01:07.177226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:29:07.554 [2024-12-16 11:01:07.177234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:29:07.554 [2024-12-16 11:01:07.177241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:29:07.554 [2024-12-16 11:01:07.177249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:29:07.554 [2024-12-16 11:01:07.177257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:29:07.554 [2024-12-16 11:01:07.177272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:29:07.554 [2024-12-16 11:01:07.177279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:29:07.554 [2024-12-16 11:01:07.177287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:29:07.554 [2024-12-16 11:01:07.177294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:29:07.554 [2024-12-16 11:01:07.177302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:29:07.554 [2024-12-16 11:01:07.177317] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:29:07.554 [2024-12-16 11:01:07.177325] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: a72a5984-caaf-4053-9bd9-3891c43e1ef0 00:29:07.554 [2024-12-16 11:01:07.177336] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:29:07.554 [2024-12-16 11:01:07.177343] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:29:07.554 [2024-12-16 11:01:07.177351] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:29:07.554 [2024-12-16 11:01:07.177358] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 
00:29:07.554 [2024-12-16 11:01:07.177365] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:29:07.554 [2024-12-16 11:01:07.177374] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:29:07.554 [2024-12-16 11:01:07.177382] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:29:07.554 [2024-12-16 11:01:07.177388] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:29:07.554 [2024-12-16 11:01:07.177394] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:29:07.554 [2024-12-16 11:01:07.177401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:07.554 [2024-12-16 11:01:07.177409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:29:07.554 [2024-12-16 11:01:07.177417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.927 ms 00:29:07.554 [2024-12-16 11:01:07.177427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:07.554 [2024-12-16 11:01:07.179771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:07.554 [2024-12-16 11:01:07.179806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:29:07.554 [2024-12-16 11:01:07.179816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.323 ms 00:29:07.554 [2024-12-16 11:01:07.179825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:07.554 [2024-12-16 11:01:07.179981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:07.554 [2024-12-16 11:01:07.179997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:29:07.554 [2024-12-16 11:01:07.180006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.133 ms 00:29:07.554 [2024-12-16 11:01:07.180017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:07.554 [2024-12-16 11:01:07.187006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:07.555 [2024-12-16 11:01:07.187052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:07.555 [2024-12-16 11:01:07.187063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:07.555 [2024-12-16 11:01:07.187071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:07.555 [2024-12-16 11:01:07.187136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:07.555 [2024-12-16 11:01:07.187145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:07.555 [2024-12-16 11:01:07.187153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:07.555 [2024-12-16 11:01:07.187165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:07.555 [2024-12-16 11:01:07.187221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:07.555 [2024-12-16 11:01:07.187231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:07.555 [2024-12-16 11:01:07.187238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:07.555 [2024-12-16 11:01:07.187252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:07.555 [2024-12-16 11:01:07.187267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:07.555 [2024-12-16 11:01:07.187275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:07.555 [2024-12-16 11:01:07.187282] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:07.555 [2024-12-16 11:01:07.187290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:07.555 [2024-12-16 11:01:07.200994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:07.555 [2024-12-16 11:01:07.201205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:07.555 [2024-12-16 11:01:07.201225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:07.555 [2024-12-16 11:01:07.201234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:07.555 [2024-12-16 11:01:07.211334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:07.555 [2024-12-16 11:01:07.211505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:07.555 [2024-12-16 11:01:07.211524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:07.555 [2024-12-16 11:01:07.211533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:07.555 [2024-12-16 11:01:07.211588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:07.555 [2024-12-16 11:01:07.211598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:07.555 [2024-12-16 11:01:07.211607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:07.555 [2024-12-16 11:01:07.211614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:07.555 [2024-12-16 11:01:07.211648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:07.555 [2024-12-16 11:01:07.211657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:07.555 [2024-12-16 11:01:07.211665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:07.555 [2024-12-16 11:01:07.211679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:07.555 [2024-12-16 11:01:07.211742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:07.555 [2024-12-16 11:01:07.211752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:07.555 [2024-12-16 11:01:07.211759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:07.555 [2024-12-16 11:01:07.211767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:07.555 [2024-12-16 11:01:07.211791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:07.555 [2024-12-16 11:01:07.211800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:29:07.555 [2024-12-16 11:01:07.211808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:07.555 [2024-12-16 11:01:07.211816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:07.555 [2024-12-16 11:01:07.211854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:07.555 [2024-12-16 11:01:07.211865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:07.555 [2024-12-16 11:01:07.211873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:07.555 [2024-12-16 11:01:07.211880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:07.555 [2024-12-16 11:01:07.211983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:07.555 [2024-12-16 11:01:07.211996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base 
bdev 00:29:07.555 [2024-12-16 11:01:07.212005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:07.555 [2024-12-16 11:01:07.212013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:07.555 [2024-12-16 11:01:07.212144] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 39.870 ms, result 0 00:29:07.555 00:29:07.555 00:29:07.555 11:01:07 ftl.ftl_restore_fast -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:29:07.555 [2024-12-16 11:01:07.518737] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:29:07.555 [2024-12-16 11:01:07.519091] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93417 ] 00:29:07.817 [2024-12-16 11:01:07.652253] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:07.817 [2024-12-16 11:01:07.703589] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:29:08.121 [2024-12-16 11:01:07.814631] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:08.121 [2024-12-16 11:01:07.814918] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:08.121 [2024-12-16 11:01:07.976113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:08.121 [2024-12-16 11:01:07.976335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:29:08.121 [2024-12-16 11:01:07.976366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:29:08.121 [2024-12-16 11:01:07.976376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:08.121 [2024-12-16 11:01:07.976454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:08.121 [2024-12-16 11:01:07.976465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:08.121 [2024-12-16 11:01:07.976474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:29:08.121 [2024-12-16 11:01:07.976490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:08.121 [2024-12-16 11:01:07.976513] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:29:08.121 [2024-12-16 11:01:07.976806] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:29:08.121 [2024-12-16 11:01:07.976825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:08.121 [2024-12-16 11:01:07.976833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:08.121 [2024-12-16 11:01:07.976847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.317 ms 00:29:08.121 [2024-12-16 11:01:07.976860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:08.121 [2024-12-16 11:01:07.977316] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:29:08.121 [2024-12-16 11:01:07.977361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:08.121 [2024-12-16 11:01:07.977371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Load super block 00:29:08.121 [2024-12-16 11:01:07.977383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:29:08.121 [2024-12-16 11:01:07.977400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:08.121 [2024-12-16 11:01:07.977456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:08.121 [2024-12-16 11:01:07.977469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:29:08.121 [2024-12-16 11:01:07.977480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:29:08.121 [2024-12-16 11:01:07.977488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:08.121 [2024-12-16 11:01:07.977748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:08.121 [2024-12-16 11:01:07.977768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:08.121 [2024-12-16 11:01:07.977782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.222 ms 00:29:08.121 [2024-12-16 11:01:07.977790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:08.121 [2024-12-16 11:01:07.977879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:08.121 [2024-12-16 11:01:07.977892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:08.121 [2024-12-16 11:01:07.977900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:29:08.121 [2024-12-16 11:01:07.977908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:08.121 [2024-12-16 11:01:07.977952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:08.121 [2024-12-16 11:01:07.977962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:29:08.121 [2024-12-16 11:01:07.977971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:29:08.121 [2024-12-16 11:01:07.977978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:08.121 [2024-12-16 11:01:07.978001] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:29:08.121 [2024-12-16 11:01:07.980152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:08.121 [2024-12-16 11:01:07.980194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:08.121 [2024-12-16 11:01:07.980208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.155 ms 00:29:08.121 [2024-12-16 11:01:07.980217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:08.121 [2024-12-16 11:01:07.980253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:08.121 [2024-12-16 11:01:07.980262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:29:08.121 [2024-12-16 11:01:07.980271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:29:08.121 [2024-12-16 11:01:07.980279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:08.121 [2024-12-16 11:01:07.980336] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:29:08.121 [2024-12-16 11:01:07.980363] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:29:08.121 [2024-12-16 11:01:07.980409] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 
00:29:08.121 [2024-12-16 11:01:07.980431] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:29:08.121 [2024-12-16 11:01:07.980537] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:29:08.121 [2024-12-16 11:01:07.980551] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:29:08.121 [2024-12-16 11:01:07.980563] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:29:08.121 [2024-12-16 11:01:07.980574] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:29:08.121 [2024-12-16 11:01:07.980587] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:29:08.121 [2024-12-16 11:01:07.980598] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:29:08.121 [2024-12-16 11:01:07.980608] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:29:08.121 [2024-12-16 11:01:07.980616] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:29:08.121 [2024-12-16 11:01:07.980623] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:29:08.121 [2024-12-16 11:01:07.980630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:08.121 [2024-12-16 11:01:07.980638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:29:08.121 [2024-12-16 11:01:07.980646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.297 ms 00:29:08.121 [2024-12-16 11:01:07.980654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:08.121 [2024-12-16 11:01:07.980753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:08.121 [2024-12-16 11:01:07.980766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:29:08.121 [2024-12-16 11:01:07.980778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:29:08.121 [2024-12-16 11:01:07.980794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:08.121 [2024-12-16 11:01:07.980892] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:29:08.121 [2024-12-16 11:01:07.980903] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:29:08.121 [2024-12-16 11:01:07.980911] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:08.121 [2024-12-16 11:01:07.980922] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:08.121 [2024-12-16 11:01:07.980955] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:29:08.121 [2024-12-16 11:01:07.980963] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:29:08.121 [2024-12-16 11:01:07.980970] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:29:08.121 [2024-12-16 11:01:07.980979] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:29:08.121 [2024-12-16 11:01:07.980987] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:29:08.121 [2024-12-16 11:01:07.980994] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:08.121 [2024-12-16 11:01:07.981003] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:29:08.121 [2024-12-16 11:01:07.981011] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:29:08.121 [2024-12-16 11:01:07.981018] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:08.122 [2024-12-16 11:01:07.981025] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:29:08.122 [2024-12-16 11:01:07.981033] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:29:08.122 [2024-12-16 11:01:07.981040] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:08.122 [2024-12-16 11:01:07.981047] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:29:08.122 [2024-12-16 11:01:07.981054] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:29:08.122 [2024-12-16 11:01:07.981060] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:08.122 [2024-12-16 11:01:07.981071] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:29:08.122 [2024-12-16 11:01:07.981077] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:29:08.122 [2024-12-16 11:01:07.981084] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:08.122 [2024-12-16 11:01:07.981091] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:29:08.122 [2024-12-16 11:01:07.981098] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:29:08.122 [2024-12-16 11:01:07.981105] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:08.122 [2024-12-16 11:01:07.981112] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:29:08.122 [2024-12-16 11:01:07.981119] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:29:08.122 [2024-12-16 11:01:07.981126] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:08.122 [2024-12-16 11:01:07.981133] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:29:08.122 [2024-12-16 11:01:07.981139] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:29:08.122 [2024-12-16 11:01:07.981146] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:08.122 [2024-12-16 11:01:07.981152] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:29:08.122 [2024-12-16 11:01:07.981159] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:29:08.122 [2024-12-16 11:01:07.981166] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:08.122 [2024-12-16 11:01:07.981172] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:29:08.122 [2024-12-16 11:01:07.981185] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:29:08.122 [2024-12-16 11:01:07.981192] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:08.122 [2024-12-16 11:01:07.981199] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:29:08.122 [2024-12-16 11:01:07.981206] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:29:08.122 [2024-12-16 11:01:07.981212] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:08.122 [2024-12-16 11:01:07.981219] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:29:08.122 [2024-12-16 11:01:07.981225] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:29:08.122 [2024-12-16 11:01:07.981234] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 
00:29:08.122 [2024-12-16 11:01:07.981242] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:29:08.122 [2024-12-16 11:01:07.981253] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:29:08.122 [2024-12-16 11:01:07.981261] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:08.122 [2024-12-16 11:01:07.981273] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:08.122 [2024-12-16 11:01:07.981281] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:29:08.122 [2024-12-16 11:01:07.981288] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:29:08.122 [2024-12-16 11:01:07.981295] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:29:08.122 [2024-12-16 11:01:07.981302] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:29:08.122 [2024-12-16 11:01:07.981312] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:29:08.122 [2024-12-16 11:01:07.981318] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:29:08.122 [2024-12-16 11:01:07.981327] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:29:08.122 [2024-12-16 11:01:07.981339] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:08.122 [2024-12-16 11:01:07.981348] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:29:08.122 [2024-12-16 11:01:07.981355] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:29:08.122 [2024-12-16 11:01:07.981362] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:29:08.122 [2024-12-16 11:01:07.981369] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:29:08.122 [2024-12-16 11:01:07.981376] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:29:08.122 [2024-12-16 11:01:07.981383] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:29:08.122 [2024-12-16 11:01:07.981390] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:29:08.122 [2024-12-16 11:01:07.981398] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:29:08.122 [2024-12-16 11:01:07.981405] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:29:08.122 [2024-12-16 11:01:07.981412] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:29:08.122 [2024-12-16 11:01:07.981419] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:29:08.122 [2024-12-16 11:01:07.981431] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:29:08.122 
[2024-12-16 11:01:07.981440] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:29:08.122 [2024-12-16 11:01:07.981448] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:29:08.122 [2024-12-16 11:01:07.981456] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:29:08.122 [2024-12-16 11:01:07.981465] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:08.122 [2024-12-16 11:01:07.981474] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:29:08.122 [2024-12-16 11:01:07.981482] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:29:08.122 [2024-12-16 11:01:07.981490] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:29:08.122 [2024-12-16 11:01:07.981498] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:29:08.122 [2024-12-16 11:01:07.981506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:08.122 [2024-12-16 11:01:07.981513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:29:08.122 [2024-12-16 11:01:07.981521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.683 ms 00:29:08.122 [2024-12-16 11:01:07.981529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:08.122 [2024-12-16 11:01:08.000157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:08.122 [2024-12-16 11:01:08.000350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:08.122 [2024-12-16 11:01:08.000429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.584 ms 00:29:08.122 [2024-12-16 11:01:08.000453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:08.122 [2024-12-16 11:01:08.000566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:08.122 [2024-12-16 11:01:08.000597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:29:08.122 [2024-12-16 11:01:08.000618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:29:08.122 [2024-12-16 11:01:08.000637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:08.122 [2024-12-16 11:01:08.013862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:08.122 [2024-12-16 11:01:08.014055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:08.122 [2024-12-16 11:01:08.014122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.141 ms 00:29:08.122 [2024-12-16 11:01:08.014146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:08.122 [2024-12-16 11:01:08.014200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:08.122 [2024-12-16 11:01:08.014223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:08.122 [2024-12-16 11:01:08.014245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:29:08.122 
[2024-12-16 11:01:08.014265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:08.122 [2024-12-16 11:01:08.014385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:08.122 [2024-12-16 11:01:08.014569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:08.122 [2024-12-16 11:01:08.014593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:29:08.122 [2024-12-16 11:01:08.014624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:08.122 [2024-12-16 11:01:08.014773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:08.122 [2024-12-16 11:01:08.014796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:08.122 [2024-12-16 11:01:08.014816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.112 ms 00:29:08.122 [2024-12-16 11:01:08.014838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:08.122 [2024-12-16 11:01:08.021982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:08.122 [2024-12-16 11:01:08.022137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:08.122 [2024-12-16 11:01:08.022200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.112 ms 00:29:08.122 [2024-12-16 11:01:08.022222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:08.122 [2024-12-16 11:01:08.022364] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:29:08.122 [2024-12-16 11:01:08.022404] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:29:08.122 [2024-12-16 11:01:08.022437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:08.123 [2024-12-16 11:01:08.022512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:29:08.123 [2024-12-16 11:01:08.022538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:29:08.123 [2024-12-16 11:01:08.022557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:08.123 [2024-12-16 11:01:08.034888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:08.123 [2024-12-16 11:01:08.035054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:29:08.123 [2024-12-16 11:01:08.035123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.301 ms 00:29:08.123 [2024-12-16 11:01:08.035165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:08.123 [2024-12-16 11:01:08.035322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:08.123 [2024-12-16 11:01:08.035374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:29:08.123 [2024-12-16 11:01:08.035398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.111 ms 00:29:08.123 [2024-12-16 11:01:08.035569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:08.123 [2024-12-16 11:01:08.035669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:08.123 [2024-12-16 11:01:08.035702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:29:08.123 [2024-12-16 11:01:08.035729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:29:08.123 [2024-12-16 11:01:08.035752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:29:08.123 [2024-12-16 11:01:08.036116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:08.123 [2024-12-16 11:01:08.036329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:29:08.123 [2024-12-16 11:01:08.036363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.313 ms 00:29:08.123 [2024-12-16 11:01:08.036383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:08.123 [2024-12-16 11:01:08.036420] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:29:08.123 [2024-12-16 11:01:08.036454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:08.123 [2024-12-16 11:01:08.036474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:29:08.123 [2024-12-16 11:01:08.036494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:29:08.123 [2024-12-16 11:01:08.036516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:08.123 [2024-12-16 11:01:08.046020] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:29:08.123 [2024-12-16 11:01:08.046295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:08.123 [2024-12-16 11:01:08.046328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:29:08.123 [2024-12-16 11:01:08.046895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.748 ms 00:29:08.123 [2024-12-16 11:01:08.047163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:08.123 [2024-12-16 11:01:08.052755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:08.123 [2024-12-16 11:01:08.053021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:29:08.123 [2024-12-16 11:01:08.053140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.432 ms 00:29:08.123 [2024-12-16 11:01:08.053188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:08.123 [2024-12-16 11:01:08.053412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:08.123 [2024-12-16 11:01:08.053471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:29:08.123 [2024-12-16 11:01:08.053576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:29:08.123 [2024-12-16 11:01:08.053624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:08.123 [2024-12-16 11:01:08.053732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:08.123 [2024-12-16 11:01:08.054637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:29:08.123 [2024-12-16 11:01:08.054689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:29:08.123 [2024-12-16 11:01:08.054708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:08.123 [2024-12-16 11:01:08.054824] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:29:08.123 [2024-12-16 11:01:08.054850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:08.123 [2024-12-16 11:01:08.054873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:29:08.123 [2024-12-16 11:01:08.054890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:29:08.123 [2024-12-16 11:01:08.054905] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:08.123 [2024-12-16 11:01:08.061985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:08.123 [2024-12-16 11:01:08.062034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:29:08.123 [2024-12-16 11:01:08.062051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.009 ms 00:29:08.123 [2024-12-16 11:01:08.062059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:08.123 [2024-12-16 11:01:08.062156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:08.123 [2024-12-16 11:01:08.062169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:29:08.123 [2024-12-16 11:01:08.062178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:29:08.123 [2024-12-16 11:01:08.062189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:08.123 [2024-12-16 11:01:08.063328] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 86.765 ms, result 0 00:29:09.512  [2024-12-16T11:01:10.447Z] Copying: 16/1024 [MB] (16 MBps) [... intermediate per-interval Copying ticks elided; interval rates ranged from 10 to 25 MBps ...] [2024-12-16T11:02:22.346Z] Copying: 1024/1024 [MB] (average 13 MBps)
[2024-12-16 11:02:22.166282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:22.357 [2024-12-16 11:02:22.166360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:30:22.357 [2024-12-16 11:02:22.166377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:30:22.357 [2024-12-16 11:02:22.166387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:22.357 [2024-12-16 11:02:22.166412] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:30:22.357 [2024-12-16 11:02:22.167266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:22.357 [2024-12-16 11:02:22.167304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:30:22.357 [2024-12-16 11:02:22.167317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.837 ms 00:30:22.357 [2024-12-16 11:02:22.167326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:22.357 [2024-12-16 11:02:22.167569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:22.357 [2024-12-16 11:02:22.167587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:30:22.357 [2024-12-16 11:02:22.167605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.213 ms 00:30:22.357 [2024-12-16 11:02:22.167613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
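The copy phase condensed above reports a truncated running average; it can be sanity-checked against the first and last tick timestamps. A rough sketch in plain Python (it assumes the [MB] counts are cumulative and each bracketed UTC time is when that tick was printed):

    from datetime import datetime

    first = datetime.fromisoformat("2024-12-16T11:01:10.447")  # "Copying: 16/1024 [MB]" tick
    last = datetime.fromisoformat("2024-12-16T11:02:22.346")   # "Copying: 1024/1024 [MB]" tick
    elapsed = (last - first).total_seconds()                   # ~71.9 s
    print((1024 - 16) / elapsed)                               # ~14.0 MB/s

An end-to-end rate of roughly 14 MB/s is consistent with the tool's truncated "average 13 MBps", which is presumably measured from the start of the transfer rather than from the first printed tick.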
00:30:22.357 [2024-12-16 11:02:22.167650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:22.357 [2024-12-16 11:02:22.167661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:30:22.357 [2024-12-16 11:02:22.167673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:30:22.357 [2024-12-16 11:02:22.167685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:22.357 [2024-12-16 11:02:22.167746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:22.357 [2024-12-16 11:02:22.167756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:30:22.357 [2024-12-16 11:02:22.167765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:30:22.357 [2024-12-16 11:02:22.167777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:22.357 [2024-12-16 11:02:22.167791] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:30:22.357
[... 100 per-band lines condensed: ftl_debug.c: 167:ftl_dev_dump_bands reported "Band N: 0 / 261120 wr_cnt: 0 state: free" identically for every band N = 1..100 ...]
[2024-12-16 11:02:22.168647] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:30:22.358 [2024-12-16 11:02:22.168655] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: a72a5984-caaf-4053-9bd9-3891c43e1ef0 00:30:22.358 [2024-12-16 11:02:22.168666] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:30:22.358 [2024-12-16 11:02:22.168674] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:30:22.358 [2024-12-16 11:02:22.168683] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:30:22.358 [2024-12-16 11:02:22.168691] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:30:22.358 [2024-12-16 11:02:22.168699] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:30:22.358 [2024-12-16 11:02:22.168751] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:30:22.358 [2024-12-16 11:02:22.168760] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:30:22.358 [2024-12-16 11:02:22.168768] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:30:22.358 [2024-12-16 11:02:22.168775] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:30:22.358 [2024-12-16 11:02:22.168783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:22.358 [2024-12-16 11:02:22.168792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:30:22.358 [2024-12-16 11:02:22.168801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.992 ms 00:30:22.358 [2024-12-16 11:02:22.168809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:22.358 [2024-12-16 11:02:22.172993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:22.358 [2024-12-16 11:02:22.173053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:30:22.358 [2024-12-16 11:02:22.173073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.162 ms 00:30:22.358 [2024-12-16 11:02:22.173089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:22.358 [2024-12-16 11:02:22.173256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:22.358 [2024-12-16 11:02:22.173276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:30:22.358 [2024-12-16 11:02:22.173305]
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.117 ms 00:30:22.358 [2024-12-16 11:02:22.173319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:22.358 [2024-12-16 11:02:22.181431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:22.358 [2024-12-16 11:02:22.181618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:22.358 [2024-12-16 11:02:22.181639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:22.358 [2024-12-16 11:02:22.181648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:22.358 [2024-12-16 11:02:22.181725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:22.358 [2024-12-16 11:02:22.181734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:22.358 [2024-12-16 11:02:22.181743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:22.358 [2024-12-16 11:02:22.181750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:22.358 [2024-12-16 11:02:22.181822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:22.358 [2024-12-16 11:02:22.181834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:22.358 [2024-12-16 11:02:22.181842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:22.358 [2024-12-16 11:02:22.181850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:22.358 [2024-12-16 11:02:22.181868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:22.358 [2024-12-16 11:02:22.181876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:22.358 [2024-12-16 11:02:22.181884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:22.358 [2024-12-16 11:02:22.181897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:22.358 [2024-12-16 11:02:22.196012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:22.358 [2024-12-16 11:02:22.196195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:22.358 [2024-12-16 11:02:22.196215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:22.358 [2024-12-16 11:02:22.196224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:22.358 [2024-12-16 11:02:22.207406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:22.358 [2024-12-16 11:02:22.207569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:22.358 [2024-12-16 11:02:22.207587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:22.358 [2024-12-16 11:02:22.207608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:22.358 [2024-12-16 11:02:22.207666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:22.358 [2024-12-16 11:02:22.207679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:22.358 [2024-12-16 11:02:22.207688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:22.358 [2024-12-16 11:02:22.207696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:22.358 [2024-12-16 11:02:22.207731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:22.358 [2024-12-16 11:02:22.207744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize bands 00:30:22.358 [2024-12-16 11:02:22.207752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:22.358 [2024-12-16 11:02:22.207761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:22.358 [2024-12-16 11:02:22.207824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:22.358 [2024-12-16 11:02:22.207836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:22.358 [2024-12-16 11:02:22.207845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:22.358 [2024-12-16 11:02:22.207853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:22.358 [2024-12-16 11:02:22.207879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:22.358 [2024-12-16 11:02:22.207893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:30:22.358 [2024-12-16 11:02:22.207902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:22.358 [2024-12-16 11:02:22.207910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:22.358 [2024-12-16 11:02:22.207976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:22.358 [2024-12-16 11:02:22.207987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:22.358 [2024-12-16 11:02:22.207999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:22.358 [2024-12-16 11:02:22.208007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:22.358 [2024-12-16 11:02:22.208055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:22.358 [2024-12-16 11:02:22.208065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:22.358 [2024-12-16 11:02:22.208079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:22.358 [2024-12-16 11:02:22.208086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:22.359 [2024-12-16 11:02:22.208220] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 41.907 ms, result 0 00:30:22.620 00:30:22.620 00:30:22.620 11:02:22 ftl.ftl_restore_fast -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:30:25.166 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:30:25.166 11:02:24 ftl.ftl_restore_fast -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:30:25.166 [2024-12-16 11:02:24.764790] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:30:25.166 [2024-12-16 11:02:24.765135] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94189 ] 00:30:25.166 [2024-12-16 11:02:24.899126] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:25.166 [2024-12-16 11:02:24.953047] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:30:25.166 [2024-12-16 11:02:25.068978] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:25.166 [2024-12-16 11:02:25.069230] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:25.430 [2024-12-16 11:02:25.230754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:25.430 [2024-12-16 11:02:25.231102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:30:25.430 [2024-12-16 11:02:25.231140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:30:25.430 [2024-12-16 11:02:25.231151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:25.430 [2024-12-16 11:02:25.231238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:25.430 [2024-12-16 11:02:25.231249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:25.430 [2024-12-16 11:02:25.231259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:30:25.430 [2024-12-16 11:02:25.231273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:25.430 [2024-12-16 11:02:25.231295] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:30:25.430 [2024-12-16 11:02:25.231571] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:30:25.430 [2024-12-16 11:02:25.231589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:25.430 [2024-12-16 11:02:25.231599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:25.430 [2024-12-16 11:02:25.231616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.300 ms 00:30:25.430 [2024-12-16 11:02:25.231625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:25.430 [2024-12-16 11:02:25.231911] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:30:25.430 [2024-12-16 11:02:25.231971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:25.430 [2024-12-16 11:02:25.231980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:30:25.430 [2024-12-16 11:02:25.231995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:30:25.430 [2024-12-16 11:02:25.232004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:25.430 [2024-12-16 11:02:25.232059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:25.430 [2024-12-16 11:02:25.232071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:30:25.430 [2024-12-16 11:02:25.232082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:30:25.430 [2024-12-16 11:02:25.232090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:25.430 [2024-12-16 11:02:25.232351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:30:25.430 [2024-12-16 11:02:25.232371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:25.430 [2024-12-16 11:02:25.232380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.220 ms 00:30:25.430 [2024-12-16 11:02:25.232388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:25.430 [2024-12-16 11:02:25.232521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:25.430 [2024-12-16 11:02:25.232541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:25.430 [2024-12-16 11:02:25.232550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:30:25.430 [2024-12-16 11:02:25.232558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:25.430 [2024-12-16 11:02:25.232583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:25.430 [2024-12-16 11:02:25.232592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:30:25.430 [2024-12-16 11:02:25.232604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:30:25.430 [2024-12-16 11:02:25.232611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:25.430 [2024-12-16 11:02:25.232634] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:30:25.430 [2024-12-16 11:02:25.234979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:25.430 [2024-12-16 11:02:25.235023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:25.430 [2024-12-16 11:02:25.235038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.349 ms 00:30:25.430 [2024-12-16 11:02:25.235046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:25.430 [2024-12-16 11:02:25.235083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:25.430 [2024-12-16 11:02:25.235092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:30:25.430 [2024-12-16 11:02:25.235101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:30:25.430 [2024-12-16 11:02:25.235115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:25.430 [2024-12-16 11:02:25.235170] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:30:25.430 [2024-12-16 11:02:25.235193] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:30:25.430 [2024-12-16 11:02:25.235238] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:30:25.430 [2024-12-16 11:02:25.235256] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:30:25.430 [2024-12-16 11:02:25.235360] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:30:25.430 [2024-12-16 11:02:25.235373] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:30:25.430 [2024-12-16 11:02:25.235385] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:30:25.430 [2024-12-16 11:02:25.235396] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:30:25.430 [2024-12-16 11:02:25.235405] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:30:25.430 [2024-12-16 11:02:25.235417] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:30:25.430 [2024-12-16 11:02:25.235427] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:30:25.430 [2024-12-16 11:02:25.235435] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:30:25.430 [2024-12-16 11:02:25.235443] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:30:25.430 [2024-12-16 11:02:25.235450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:25.430 [2024-12-16 11:02:25.235458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:30:25.430 [2024-12-16 11:02:25.235470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.283 ms 00:30:25.430 [2024-12-16 11:02:25.235478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:25.430 [2024-12-16 11:02:25.235564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:25.430 [2024-12-16 11:02:25.235572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:30:25.430 [2024-12-16 11:02:25.235580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:30:25.430 [2024-12-16 11:02:25.235590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:25.430 [2024-12-16 11:02:25.235687] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:30:25.430 [2024-12-16 11:02:25.235697] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:30:25.430 [2024-12-16 11:02:25.235706] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:25.430 [2024-12-16 11:02:25.235716] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:25.430 [2024-12-16 11:02:25.235725] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:30:25.430 [2024-12-16 11:02:25.235732] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:30:25.430 [2024-12-16 11:02:25.235739] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:30:25.430 [2024-12-16 11:02:25.235746] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:30:25.430 [2024-12-16 11:02:25.235754] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:30:25.430 [2024-12-16 11:02:25.235761] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:25.430 [2024-12-16 11:02:25.235768] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:30:25.430 [2024-12-16 11:02:25.235776] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:30:25.430 [2024-12-16 11:02:25.235783] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:25.430 [2024-12-16 11:02:25.235790] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:30:25.430 [2024-12-16 11:02:25.235797] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:30:25.430 [2024-12-16 11:02:25.235803] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:25.430 [2024-12-16 11:02:25.235811] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:30:25.430 [2024-12-16 11:02:25.235818] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:30:25.430 [2024-12-16 11:02:25.235825] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:25.430 [2024-12-16 11:02:25.235834] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:30:25.430 [2024-12-16 11:02:25.235841] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:30:25.430 [2024-12-16 11:02:25.235848] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:25.430 [2024-12-16 11:02:25.235857] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:30:25.431 [2024-12-16 11:02:25.235863] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:30:25.431 [2024-12-16 11:02:25.235870] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:25.431 [2024-12-16 11:02:25.235877] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:30:25.431 [2024-12-16 11:02:25.235884] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:30:25.431 [2024-12-16 11:02:25.235890] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:25.431 [2024-12-16 11:02:25.235897] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:30:25.431 [2024-12-16 11:02:25.235904] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:30:25.431 [2024-12-16 11:02:25.235911] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:25.431 [2024-12-16 11:02:25.235917] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:30:25.431 [2024-12-16 11:02:25.235924] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:30:25.431 [2024-12-16 11:02:25.235958] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:25.431 [2024-12-16 11:02:25.235965] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:30:25.431 [2024-12-16 11:02:25.235977] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:30:25.431 [2024-12-16 11:02:25.235984] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:25.431 [2024-12-16 11:02:25.235991] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:30:25.431 [2024-12-16 11:02:25.235997] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:30:25.431 [2024-12-16 11:02:25.236003] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:25.431 [2024-12-16 11:02:25.236011] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:30:25.431 [2024-12-16 11:02:25.236017] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:30:25.431 [2024-12-16 11:02:25.236025] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:25.431 [2024-12-16 11:02:25.236031] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:30:25.431 [2024-12-16 11:02:25.236045] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:30:25.431 [2024-12-16 11:02:25.236054] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:25.431 [2024-12-16 11:02:25.236062] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:25.431 [2024-12-16 11:02:25.236073] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:30:25.431 [2024-12-16 11:02:25.236080] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:30:25.431 [2024-12-16 11:02:25.236087] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:30:25.431 
[2024-12-16 11:02:25.236094] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:30:25.431 [2024-12-16 11:02:25.236103] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:30:25.431 [2024-12-16 11:02:25.236110] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:30:25.431 [2024-12-16 11:02:25.236118] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:30:25.431 [2024-12-16 11:02:25.236132] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:25.431 [2024-12-16 11:02:25.236141] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:30:25.431 [2024-12-16 11:02:25.236149] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:30:25.431 [2024-12-16 11:02:25.236157] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:30:25.431 [2024-12-16 11:02:25.236165] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:30:25.431 [2024-12-16 11:02:25.236172] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:30:25.431 [2024-12-16 11:02:25.236180] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:30:25.431 [2024-12-16 11:02:25.236189] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:30:25.431 [2024-12-16 11:02:25.236197] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:30:25.431 [2024-12-16 11:02:25.236204] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:30:25.431 [2024-12-16 11:02:25.236211] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:30:25.431 [2024-12-16 11:02:25.236226] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:30:25.431 [2024-12-16 11:02:25.236240] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:30:25.431 [2024-12-16 11:02:25.236249] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:30:25.431 [2024-12-16 11:02:25.236256] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:30:25.431 [2024-12-16 11:02:25.236263] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:30:25.431 [2024-12-16 11:02:25.236272] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:25.431 [2024-12-16 11:02:25.236280] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:30:25.431 [2024-12-16 11:02:25.236287] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:30:25.431 [2024-12-16 11:02:25.236295] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:30:25.431 [2024-12-16 11:02:25.236302] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:30:25.431 [2024-12-16 11:02:25.236309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:25.431 [2024-12-16 11:02:25.236318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:30:25.431 [2024-12-16 11:02:25.236326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.691 ms 00:30:25.431 [2024-12-16 11:02:25.236333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:25.431 [2024-12-16 11:02:25.263383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:25.431 [2024-12-16 11:02:25.263485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:25.431 [2024-12-16 11:02:25.263524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.999 ms 00:30:25.431 [2024-12-16 11:02:25.263561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:25.431 [2024-12-16 11:02:25.263817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:25.431 [2024-12-16 11:02:25.263856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:30:25.431 [2024-12-16 11:02:25.263881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.165 ms 00:30:25.431 [2024-12-16 11:02:25.263904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:25.431 [2024-12-16 11:02:25.276909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:25.431 [2024-12-16 11:02:25.276979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:25.431 [2024-12-16 11:02:25.276995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.807 ms 00:30:25.431 [2024-12-16 11:02:25.277004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:25.431 [2024-12-16 11:02:25.277046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:25.431 [2024-12-16 11:02:25.277056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:25.431 [2024-12-16 11:02:25.277065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:30:25.431 [2024-12-16 11:02:25.277074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:25.431 [2024-12-16 11:02:25.277177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:25.431 [2024-12-16 11:02:25.277188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:25.431 [2024-12-16 11:02:25.277197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:30:25.431 [2024-12-16 11:02:25.277208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:25.431 [2024-12-16 11:02:25.277335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:25.431 [2024-12-16 11:02:25.277347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:25.431 [2024-12-16 11:02:25.277356] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.112 ms 00:30:25.431 [2024-12-16 11:02:25.277368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:25.431 [2024-12-16 11:02:25.284493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:25.431 [2024-12-16 11:02:25.284551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:25.431 [2024-12-16 11:02:25.284562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.104 ms 00:30:25.431 [2024-12-16 11:02:25.284569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:25.431 [2024-12-16 11:02:25.284701] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:30:25.431 [2024-12-16 11:02:25.284734] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:30:25.431 [2024-12-16 11:02:25.284744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:25.431 [2024-12-16 11:02:25.284754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:30:25.431 [2024-12-16 11:02:25.284762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:30:25.431 [2024-12-16 11:02:25.284770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:25.431 [2024-12-16 11:02:25.297281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:25.431 [2024-12-16 11:02:25.297332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:30:25.431 [2024-12-16 11:02:25.297344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.494 ms 00:30:25.431 [2024-12-16 11:02:25.297351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:25.431 [2024-12-16 11:02:25.297490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:25.431 [2024-12-16 11:02:25.297505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:30:25.431 [2024-12-16 11:02:25.297514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.107 ms 00:30:25.431 [2024-12-16 11:02:25.297521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:25.431 [2024-12-16 11:02:25.297571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:25.431 [2024-12-16 11:02:25.297584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:30:25.431 [2024-12-16 11:02:25.297593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.001 ms 00:30:25.431 [2024-12-16 11:02:25.297603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:25.431 [2024-12-16 11:02:25.297911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:25.431 [2024-12-16 11:02:25.297921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:30:25.431 [2024-12-16 11:02:25.297969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.270 ms 00:30:25.432 [2024-12-16 11:02:25.297980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:25.432 [2024-12-16 11:02:25.298014] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:30:25.432 [2024-12-16 11:02:25.298030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:25.432 [2024-12-16 11:02:25.298041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Restore P2L checkpoints 00:30:25.432 [2024-12-16 11:02:25.298054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:30:25.432 [2024-12-16 11:02:25.298069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:25.432 [2024-12-16 11:02:25.307537] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:30:25.432 [2024-12-16 11:02:25.307703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:25.432 [2024-12-16 11:02:25.307714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:30:25.432 [2024-12-16 11:02:25.307724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.604 ms 00:30:25.432 [2024-12-16 11:02:25.307732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:25.432 [2024-12-16 11:02:25.310206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:25.432 [2024-12-16 11:02:25.310246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:30:25.432 [2024-12-16 11:02:25.310256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.448 ms 00:30:25.432 [2024-12-16 11:02:25.310267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:25.432 [2024-12-16 11:02:25.310370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:25.432 [2024-12-16 11:02:25.310385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:30:25.432 [2024-12-16 11:02:25.310394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:30:25.432 [2024-12-16 11:02:25.310402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:25.432 [2024-12-16 11:02:25.310429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:25.432 [2024-12-16 11:02:25.310445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:30:25.432 [2024-12-16 11:02:25.310453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:30:25.432 [2024-12-16 11:02:25.310460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:25.432 [2024-12-16 11:02:25.310493] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:30:25.432 [2024-12-16 11:02:25.310502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:25.432 [2024-12-16 11:02:25.310513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:30:25.432 [2024-12-16 11:02:25.310520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:30:25.432 [2024-12-16 11:02:25.310527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:25.432 [2024-12-16 11:02:25.316949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:25.432 [2024-12-16 11:02:25.316999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:30:25.432 [2024-12-16 11:02:25.317017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.401 ms 00:30:25.432 [2024-12-16 11:02:25.317025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:25.432 [2024-12-16 11:02:25.317120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:25.432 [2024-12-16 11:02:25.317135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:30:25.432 [2024-12-16 11:02:25.317144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.039 ms 00:30:25.432 [2024-12-16 11:02:25.317152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:25.432 [2024-12-16 11:02:25.318359] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 87.114 ms, result 0 00:30:26.375  [2024-12-16T11:02:27.751Z] Copying: 16/1024 [MB] (16 MBps) [2024-12-16T11:02:28.694Z] Copying: 26/1024 [MB] (10 MBps) [2024-12-16T11:02:29.639Z] Copying: 41/1024 [MB] (14 MBps) [2024-12-16T11:02:30.584Z] Copying: 52/1024 [MB] (11 MBps) [2024-12-16T11:02:31.521Z] Copying: 63/1024 [MB] (10 MBps) [2024-12-16T11:02:32.485Z] Copying: 85/1024 [MB] (22 MBps) [2024-12-16T11:02:33.421Z] Copying: 116/1024 [MB] (31 MBps) [2024-12-16T11:02:34.365Z] Copying: 147/1024 [MB] (30 MBps) [2024-12-16T11:02:35.754Z] Copying: 159/1024 [MB] (12 MBps) [2024-12-16T11:02:36.699Z] Copying: 169/1024 [MB] (10 MBps) [2024-12-16T11:02:37.643Z] Copying: 179/1024 [MB] (10 MBps) [2024-12-16T11:02:38.588Z] Copying: 191/1024 [MB] (11 MBps) [2024-12-16T11:02:39.534Z] Copying: 205/1024 [MB] (14 MBps) [2024-12-16T11:02:40.477Z] Copying: 219/1024 [MB] (13 MBps) [2024-12-16T11:02:41.422Z] Copying: 232/1024 [MB] (13 MBps) [2024-12-16T11:02:42.359Z] Copying: 248628/1048576 [kB] (10156 kBps) [2024-12-16T11:02:43.729Z] Copying: 262/1024 [MB] (20 MBps) [2024-12-16T11:02:44.662Z] Copying: 290/1024 [MB] (27 MBps) [2024-12-16T11:02:45.599Z] Copying: 317/1024 [MB] (27 MBps) [2024-12-16T11:02:46.532Z] Copying: 343/1024 [MB] (25 MBps) [2024-12-16T11:02:47.466Z] Copying: 384/1024 [MB] (41 MBps) [2024-12-16T11:02:48.399Z] Copying: 410/1024 [MB] (25 MBps) [2024-12-16T11:02:49.333Z] Copying: 436/1024 [MB] (25 MBps) [2024-12-16T11:02:50.707Z] Copying: 470/1024 [MB] (34 MBps) [2024-12-16T11:02:51.640Z] Copying: 509/1024 [MB] (38 MBps) [2024-12-16T11:02:52.573Z] Copying: 537/1024 [MB] (28 MBps) [2024-12-16T11:02:53.508Z] Copying: 568/1024 [MB] (30 MBps) [2024-12-16T11:02:54.452Z] Copying: 611/1024 [MB] (42 MBps) [2024-12-16T11:02:55.399Z] Copying: 624/1024 [MB] (12 MBps) [2024-12-16T11:02:56.342Z] Copying: 649536/1048576 [kB] (10216 kBps) [2024-12-16T11:02:57.730Z] Copying: 662/1024 [MB] (28 MBps) [2024-12-16T11:02:58.721Z] Copying: 673/1024 [MB] (10 MBps) [2024-12-16T11:02:59.671Z] Copying: 683/1024 [MB] (10 MBps) [2024-12-16T11:03:00.613Z] Copying: 700/1024 [MB] (16 MBps) [2024-12-16T11:03:01.554Z] Copying: 717/1024 [MB] (17 MBps) [2024-12-16T11:03:02.498Z] Copying: 736/1024 [MB] (18 MBps) [2024-12-16T11:03:03.447Z] Copying: 754/1024 [MB] (18 MBps) [2024-12-16T11:03:04.392Z] Copying: 774/1024 [MB] (20 MBps) [2024-12-16T11:03:05.336Z] Copying: 793/1024 [MB] (18 MBps) [2024-12-16T11:03:06.722Z] Copying: 814/1024 [MB] (21 MBps) [2024-12-16T11:03:07.666Z] Copying: 829/1024 [MB] (15 MBps) [2024-12-16T11:03:08.611Z] Copying: 841/1024 [MB] (11 MBps) [2024-12-16T11:03:09.554Z] Copying: 854/1024 [MB] (13 MBps) [2024-12-16T11:03:10.499Z] Copying: 864/1024 [MB] (10 MBps) [2024-12-16T11:03:11.443Z] Copying: 877/1024 [MB] (12 MBps) [2024-12-16T11:03:12.388Z] Copying: 908496/1048576 [kB] (10168 kBps) [2024-12-16T11:03:13.334Z] Copying: 897/1024 [MB] (10 MBps) [2024-12-16T11:03:14.720Z] Copying: 907/1024 [MB] (10 MBps) [2024-12-16T11:03:15.661Z] Copying: 939224/1048576 [kB] (10144 kBps) [2024-12-16T11:03:16.604Z] Copying: 930/1024 [MB] (13 MBps) [2024-12-16T11:03:17.550Z] Copying: 941/1024 [MB] (10 MBps) [2024-12-16T11:03:18.495Z] Copying: 956/1024 [MB] (15 MBps) [2024-12-16T11:03:19.439Z] Copying: 969/1024 [MB] (13 MBps) 
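
The run of Copying ticks above is spdk_dd streaming the 1024 MB test pattern through the freshly started FTL instance; the run closes just below with "Copying: 1024/1024 [MB] (average 18 MBps)". The tick timestamps allow a sanity check of that average (first and last tick copied from the log; this is back-of-the-envelope arithmetic, not how spdk_dd computes its own figure):

#include <stdio.h>

int main(void)
{
    /* Seconds past 11:00:00 for two progress ticks taken from the log. */
    const double t_first = 2 * 60 + 27.751;  /* 11:02:27.751 ->   16 MB */
    const double t_last  = 3 * 60 + 22.022;  /* 11:03:22.022 -> 1024 MB */
    const double mb      = 1024.0 - 16.0;

    /* ~1008 MB in ~54.3 s is ~18.6 MBps, consistent with the reported
     * "average 18 MBps" given rounding and the untimed first 16 MB. */
    printf("%.1f MBps over %.3f s\n", mb / (t_last - t_first), t_last - t_first);
    return 0;
}
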
[2024-12-16T11:03:20.381Z] Copying: 988/1024 [MB] (18 MBps) [2024-12-16T11:03:21.762Z] Copying: 1000/1024 [MB] (12 MBps) [2024-12-16T11:03:22.022Z] Copying: 1023/1024 [MB] (22 MBps) [2024-12-16T11:03:22.022Z] Copying: 1024/1024 [MB] (average 18 MBps)[2024-12-16 11:03:22.012991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:22.033 [2024-12-16 11:03:22.013076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:31:22.033 [2024-12-16 11:03:22.013094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:31:22.033 [2024-12-16 11:03:22.013104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:22.033 [2024-12-16 11:03:22.015087] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:31:22.033 [2024-12-16 11:03:22.017641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:22.033 [2024-12-16 11:03:22.017693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:31:22.033 [2024-12-16 11:03:22.017706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.461 ms 00:31:22.033 [2024-12-16 11:03:22.017716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:22.296 [2024-12-16 11:03:22.029773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:22.296 [2024-12-16 11:03:22.029823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:31:22.296 [2024-12-16 11:03:22.029843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.177 ms 00:31:22.296 [2024-12-16 11:03:22.029852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:22.296 [2024-12-16 11:03:22.029887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:22.296 [2024-12-16 11:03:22.029897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:31:22.296 [2024-12-16 11:03:22.029905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:31:22.296 [2024-12-16 11:03:22.029914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:22.296 [2024-12-16 11:03:22.030000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:22.296 [2024-12-16 11:03:22.030011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:31:22.296 [2024-12-16 11:03:22.030020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:31:22.296 [2024-12-16 11:03:22.030034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:22.296 [2024-12-16 11:03:22.030048] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:31:22.296 [2024-12-16 11:03:22.030061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 128000 / 261120 wr_cnt: 1 state: open 00:31:22.296 [2024-12-16 11:03:22.030099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:31:22.296 [2024-12-16 11:03:22.030109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:31:22.296 [2024-12-16 11:03:22.030117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:31:22.296 [2024-12-16 11:03:22.030125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:31:22.296 [2024-12-16 11:03:22.030133] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:31:22.296 [2024-12-16 11:03:22.030141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:31:22.296 [2024-12-16 11:03:22.030150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:31:22.296 [2024-12-16 11:03:22.030159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:31:22.296 [2024-12-16 11:03:22.030167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:31:22.296 [2024-12-16 11:03:22.030176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:31:22.296 [2024-12-16 11:03:22.030183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:31:22.296 [2024-12-16 11:03:22.030191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:31:22.296 [2024-12-16 11:03:22.030199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:31:22.296 [2024-12-16 11:03:22.030207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:31:22.296 [2024-12-16 11:03:22.030215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:31:22.296 [2024-12-16 11:03:22.030223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:31:22.296 [2024-12-16 11:03:22.030230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:31:22.296 [2024-12-16 11:03:22.030238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:31:22.296 [2024-12-16 11:03:22.030246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:31:22.296 [2024-12-16 11:03:22.030256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:31:22.296 [2024-12-16 11:03:22.030264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:31:22.296 [2024-12-16 11:03:22.030272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:31:22.296 [2024-12-16 11:03:22.030282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:31:22.296 [2024-12-16 11:03:22.030290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:31:22.296 [2024-12-16 11:03:22.030299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:31:22.296 [2024-12-16 11:03:22.030312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:31:22.296 [2024-12-16 11:03:22.030319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:31:22.296 [2024-12-16 11:03:22.030335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:31:22.296 [2024-12-16 11:03:22.030343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:31:22.296 [2024-12-16 
11:03:22.030351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:31:22.296 [2024-12-16 11:03:22.030359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:31:22.296 [2024-12-16 11:03:22.030367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:31:22.296 [2024-12-16 11:03:22.030375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:31:22.296 [2024-12-16 11:03:22.030383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:31:22.296 [2024-12-16 11:03:22.030391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:31:22.296 [2024-12-16 11:03:22.030398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:31:22.296 [2024-12-16 11:03:22.030406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:31:22.296 [2024-12-16 11:03:22.030415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:31:22.296 [2024-12-16 11:03:22.030423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:31:22.296 [2024-12-16 11:03:22.030431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:31:22.296 [2024-12-16 11:03:22.030438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:31:22.296 [2024-12-16 11:03:22.030446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:31:22.296 [2024-12-16 11:03:22.030453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:31:22.296 [2024-12-16 11:03:22.030460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:31:22.296 [2024-12-16 11:03:22.030468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:31:22.296 [2024-12-16 11:03:22.030476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:31:22.296 [2024-12-16 11:03:22.030484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:31:22.296 [2024-12-16 11:03:22.030491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:31:22.296 [2024-12-16 11:03:22.030498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:31:22.296 [2024-12-16 11:03:22.030506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:31:22.296 [2024-12-16 11:03:22.030514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:31:22.296 [2024-12-16 11:03:22.030522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:31:22.296 [2024-12-16 11:03:22.030529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:31:22.296 [2024-12-16 11:03:22.030537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 
00:31:22.296 [2024-12-16 11:03:22.030546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:31:22.296 [2024-12-16 11:03:22.030554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:31:22.296 [2024-12-16 11:03:22.030561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:31:22.296 [2024-12-16 11:03:22.030569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:31:22.296 [2024-12-16 11:03:22.030577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:31:22.296 [2024-12-16 11:03:22.030584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:31:22.296 [2024-12-16 11:03:22.030592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:31:22.296 [2024-12-16 11:03:22.030600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:31:22.296 [2024-12-16 11:03:22.030607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:31:22.296 [2024-12-16 11:03:22.030615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:31:22.296 [2024-12-16 11:03:22.030623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:31:22.296 [2024-12-16 11:03:22.030630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:31:22.296 [2024-12-16 11:03:22.030638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:31:22.296 [2024-12-16 11:03:22.030646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:31:22.296 [2024-12-16 11:03:22.030654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:31:22.296 [2024-12-16 11:03:22.030661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:31:22.296 [2024-12-16 11:03:22.030669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:31:22.296 [2024-12-16 11:03:22.030676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:31:22.297 [2024-12-16 11:03:22.030688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:31:22.297 [2024-12-16 11:03:22.030696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:31:22.297 [2024-12-16 11:03:22.030703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:31:22.297 [2024-12-16 11:03:22.030710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:31:22.297 [2024-12-16 11:03:22.030718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:31:22.297 [2024-12-16 11:03:22.030726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:31:22.297 [2024-12-16 11:03:22.030734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 
wr_cnt: 0 state: free 00:31:22.297 [2024-12-16 11:03:22.030742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:31:22.297 [2024-12-16 11:03:22.030749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:31:22.297 [2024-12-16 11:03:22.030757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:31:22.297 [2024-12-16 11:03:22.030765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:31:22.297 [2024-12-16 11:03:22.030773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:31:22.297 [2024-12-16 11:03:22.030781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:31:22.297 [2024-12-16 11:03:22.030789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:31:22.297 [2024-12-16 11:03:22.030796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:31:22.297 [2024-12-16 11:03:22.030804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:31:22.297 [2024-12-16 11:03:22.030812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:31:22.297 [2024-12-16 11:03:22.030819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:31:22.297 [2024-12-16 11:03:22.030826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:31:22.297 [2024-12-16 11:03:22.030834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:31:22.297 [2024-12-16 11:03:22.030850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:31:22.297 [2024-12-16 11:03:22.030858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:31:22.297 [2024-12-16 11:03:22.030866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:31:22.297 [2024-12-16 11:03:22.030874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:31:22.297 [2024-12-16 11:03:22.030882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:31:22.297 [2024-12-16 11:03:22.030890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:31:22.297 [2024-12-16 11:03:22.030898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:31:22.297 [2024-12-16 11:03:22.030914] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:31:22.297 [2024-12-16 11:03:22.030922] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: a72a5984-caaf-4053-9bd9-3891c43e1ef0 00:31:22.297 [2024-12-16 11:03:22.030947] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 128000 00:31:22.297 [2024-12-16 11:03:22.030955] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 128032 00:31:22.297 [2024-12-16 11:03:22.030962] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 128000 00:31:22.297 [2024-12-16 11:03:22.030971] 
ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0003 00:31:22.297 [2024-12-16 11:03:22.030978] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:31:22.297 [2024-12-16 11:03:22.030987] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:31:22.297 [2024-12-16 11:03:22.030997] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:31:22.297 [2024-12-16 11:03:22.031004] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:31:22.297 [2024-12-16 11:03:22.031011] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:31:22.297 [2024-12-16 11:03:22.031019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:22.297 [2024-12-16 11:03:22.031028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:31:22.297 [2024-12-16 11:03:22.031036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.971 ms 00:31:22.297 [2024-12-16 11:03:22.031045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:22.297 [2024-12-16 11:03:22.033424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:22.297 [2024-12-16 11:03:22.033609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:31:22.297 [2024-12-16 11:03:22.033628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.363 ms 00:31:22.297 [2024-12-16 11:03:22.033638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:22.297 [2024-12-16 11:03:22.033776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:22.297 [2024-12-16 11:03:22.033785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:31:22.297 [2024-12-16 11:03:22.033795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.105 ms 00:31:22.297 [2024-12-16 11:03:22.033802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:22.297 [2024-12-16 11:03:22.040645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:22.297 [2024-12-16 11:03:22.040838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:22.297 [2024-12-16 11:03:22.040862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:22.297 [2024-12-16 11:03:22.040871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:22.297 [2024-12-16 11:03:22.040974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:22.297 [2024-12-16 11:03:22.040985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:22.297 [2024-12-16 11:03:22.040994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:22.297 [2024-12-16 11:03:22.041001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:22.297 [2024-12-16 11:03:22.041059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:22.297 [2024-12-16 11:03:22.041070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:22.297 [2024-12-16 11:03:22.041084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:22.297 [2024-12-16 11:03:22.041092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:22.297 [2024-12-16 11:03:22.041113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:22.297 [2024-12-16 11:03:22.041125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid 
map 00:31:22.297 [2024-12-16 11:03:22.041132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:22.297 [2024-12-16 11:03:22.041140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:22.297 [2024-12-16 11:03:22.054497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:22.297 [2024-12-16 11:03:22.054549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:22.297 [2024-12-16 11:03:22.054561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:22.297 [2024-12-16 11:03:22.054575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:22.297 [2024-12-16 11:03:22.065482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:22.297 [2024-12-16 11:03:22.065531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:22.297 [2024-12-16 11:03:22.065542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:22.297 [2024-12-16 11:03:22.065550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:22.297 [2024-12-16 11:03:22.065634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:22.297 [2024-12-16 11:03:22.065645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:22.297 [2024-12-16 11:03:22.065654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:22.297 [2024-12-16 11:03:22.065666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:22.297 [2024-12-16 11:03:22.065704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:22.297 [2024-12-16 11:03:22.065713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:22.297 [2024-12-16 11:03:22.065721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:22.297 [2024-12-16 11:03:22.065729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:22.297 [2024-12-16 11:03:22.065787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:22.297 [2024-12-16 11:03:22.065796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:22.297 [2024-12-16 11:03:22.065805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:22.297 [2024-12-16 11:03:22.065812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:22.297 [2024-12-16 11:03:22.065837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:22.297 [2024-12-16 11:03:22.065849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:31:22.297 [2024-12-16 11:03:22.065857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:22.297 [2024-12-16 11:03:22.065866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:22.297 [2024-12-16 11:03:22.065904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:22.297 [2024-12-16 11:03:22.065914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:22.297 [2024-12-16 11:03:22.065922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:22.297 [2024-12-16 11:03:22.065958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:22.297 [2024-12-16 11:03:22.066006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:22.297 [2024-12-16 11:03:22.066017] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:22.297 [2024-12-16 11:03:22.066026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:22.297 [2024-12-16 11:03:22.066034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:22.297 [2024-12-16 11:03:22.066165] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 54.463 ms, result 0 00:31:22.869 00:31:22.869 00:31:22.869 11:03:22 ftl.ftl_restore_fast -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:31:22.869 [2024-12-16 11:03:22.829360] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:31:22.869 [2024-12-16 11:03:22.829780] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94774 ] 00:31:23.131 [2024-12-16 11:03:22.965082] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:23.131 [2024-12-16 11:03:23.017597] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:31:23.394 [2024-12-16 11:03:23.135166] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:31:23.394 [2024-12-16 11:03:23.135255] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:31:23.394 [2024-12-16 11:03:23.297364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:23.394 [2024-12-16 11:03:23.297425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:31:23.394 [2024-12-16 11:03:23.297450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:31:23.394 [2024-12-16 11:03:23.297459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:23.394 [2024-12-16 11:03:23.297519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:23.394 [2024-12-16 11:03:23.297534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:23.394 [2024-12-16 11:03:23.297544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:31:23.394 [2024-12-16 11:03:23.297562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:23.394 [2024-12-16 11:03:23.297587] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:31:23.394 [2024-12-16 11:03:23.297856] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:31:23.394 [2024-12-16 11:03:23.297874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:23.394 [2024-12-16 11:03:23.297882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:23.394 [2024-12-16 11:03:23.297891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.296 ms 00:31:23.394 [2024-12-16 11:03:23.297899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:23.394 [2024-12-16 11:03:23.298216] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:31:23.394 [2024-12-16 11:03:23.298244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
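
The statistics block that ftl_dev_dump_stats printed during the shutdown above is also internally consistent: WAF is total media writes divided by user writes, and 128032 / 128000 is the logged 1.0003 once printed to four decimals. A quick reproduction in C (counters copied from the log; attributing the 32 extra blocks to FTL-internal metadata writes is an inference, not something the log states):

#include <stdio.h>

int main(void)
{
    /* Counters from the ftl_dev_dump_stats block in the log. */
    const double total_writes = 128032.0;  /* "total writes" */
    const double user_writes  = 128000.0;  /* "user writes"  */

    /* 128032 / 128000 = 1.00025, which prints as 1.0003 at four
     * decimal places -- the "WAF: 1.0003" line above. */
    printf("WAF: %.4f\n", total_writes / user_writes);
    return 0;
}
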
00:31:23.394 [2024-12-16 11:03:23.298253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:31:23.394 [2024-12-16 11:03:23.298263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:31:23.394 [2024-12-16 11:03:23.298271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:23.394 [2024-12-16 11:03:23.298327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:23.394 [2024-12-16 11:03:23.298339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:31:23.394 [2024-12-16 11:03:23.298350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:31:23.394 [2024-12-16 11:03:23.298358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:23.394 [2024-12-16 11:03:23.298610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:23.394 [2024-12-16 11:03:23.298629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:23.394 [2024-12-16 11:03:23.298641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.211 ms 00:31:23.394 [2024-12-16 11:03:23.298652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:23.394 [2024-12-16 11:03:23.298775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:23.394 [2024-12-16 11:03:23.298792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:23.394 [2024-12-16 11:03:23.298801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:31:23.394 [2024-12-16 11:03:23.298808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:23.394 [2024-12-16 11:03:23.298833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:23.394 [2024-12-16 11:03:23.298842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:31:23.394 [2024-12-16 11:03:23.298850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:31:23.394 [2024-12-16 11:03:23.298858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:23.394 [2024-12-16 11:03:23.298880] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:31:23.394 [2024-12-16 11:03:23.301042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:23.394 [2024-12-16 11:03:23.301084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:23.394 [2024-12-16 11:03:23.301095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.165 ms 00:31:23.394 [2024-12-16 11:03:23.301104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:23.394 [2024-12-16 11:03:23.301139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:23.394 [2024-12-16 11:03:23.301148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:31:23.394 [2024-12-16 11:03:23.301162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:31:23.394 [2024-12-16 11:03:23.301171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:23.394 [2024-12-16 11:03:23.301218] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:31:23.394 [2024-12-16 11:03:23.301241] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:31:23.394 [2024-12-16 11:03:23.301288] upgrade/ftl_sb_v5.c: 
287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:31:23.394 [2024-12-16 11:03:23.301309] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:31:23.394 [2024-12-16 11:03:23.301414] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:31:23.394 [2024-12-16 11:03:23.301426] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:31:23.394 [2024-12-16 11:03:23.301437] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:31:23.394 [2024-12-16 11:03:23.301449] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:31:23.394 [2024-12-16 11:03:23.301457] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:31:23.394 [2024-12-16 11:03:23.301465] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:31:23.394 [2024-12-16 11:03:23.301475] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:31:23.394 [2024-12-16 11:03:23.301483] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:31:23.394 [2024-12-16 11:03:23.301494] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:31:23.394 [2024-12-16 11:03:23.301502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:23.394 [2024-12-16 11:03:23.301512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:31:23.394 [2024-12-16 11:03:23.301519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.286 ms 00:31:23.394 [2024-12-16 11:03:23.301526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:23.395 [2024-12-16 11:03:23.301612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:23.395 [2024-12-16 11:03:23.301621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:31:23.395 [2024-12-16 11:03:23.301629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:31:23.395 [2024-12-16 11:03:23.301639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:23.395 [2024-12-16 11:03:23.301743] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:31:23.395 [2024-12-16 11:03:23.301753] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:31:23.395 [2024-12-16 11:03:23.301761] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:23.395 [2024-12-16 11:03:23.301770] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:23.395 [2024-12-16 11:03:23.301778] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:31:23.395 [2024-12-16 11:03:23.301785] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:31:23.395 [2024-12-16 11:03:23.301792] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:31:23.395 [2024-12-16 11:03:23.301800] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:31:23.395 [2024-12-16 11:03:23.301810] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:31:23.395 [2024-12-16 11:03:23.301817] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:23.395 [2024-12-16 11:03:23.301824] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:31:23.395 [2024-12-16 11:03:23.301833] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:31:23.395 [2024-12-16 11:03:23.301841] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:23.395 [2024-12-16 11:03:23.301848] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:31:23.395 [2024-12-16 11:03:23.301854] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:31:23.395 [2024-12-16 11:03:23.301861] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:23.395 [2024-12-16 11:03:23.301868] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:31:23.395 [2024-12-16 11:03:23.301875] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:31:23.395 [2024-12-16 11:03:23.301881] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:23.395 [2024-12-16 11:03:23.301889] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:31:23.395 [2024-12-16 11:03:23.301896] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:31:23.395 [2024-12-16 11:03:23.301902] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:23.395 [2024-12-16 11:03:23.301909] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:31:23.395 [2024-12-16 11:03:23.301916] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:31:23.395 [2024-12-16 11:03:23.301924] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:23.395 [2024-12-16 11:03:23.301954] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:31:23.395 [2024-12-16 11:03:23.301961] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:31:23.395 [2024-12-16 11:03:23.301968] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:23.395 [2024-12-16 11:03:23.301975] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:31:23.395 [2024-12-16 11:03:23.301981] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:31:23.395 [2024-12-16 11:03:23.301988] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:23.395 [2024-12-16 11:03:23.301995] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:31:23.395 [2024-12-16 11:03:23.302002] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:31:23.395 [2024-12-16 11:03:23.302009] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:23.395 [2024-12-16 11:03:23.302015] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:31:23.395 [2024-12-16 11:03:23.302022] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:31:23.395 [2024-12-16 11:03:23.302028] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:23.395 [2024-12-16 11:03:23.302035] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:31:23.395 [2024-12-16 11:03:23.302041] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:31:23.395 [2024-12-16 11:03:23.302048] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:23.395 [2024-12-16 11:03:23.302059] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:31:23.395 [2024-12-16 11:03:23.302067] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:31:23.395 
[2024-12-16 11:03:23.302076] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:23.395 [2024-12-16 11:03:23.302084] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:31:23.395 [2024-12-16 11:03:23.302092] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:31:23.395 [2024-12-16 11:03:23.302100] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:23.395 [2024-12-16 11:03:23.302107] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:23.395 [2024-12-16 11:03:23.302119] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:31:23.395 [2024-12-16 11:03:23.302126] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:31:23.395 [2024-12-16 11:03:23.302133] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:31:23.395 [2024-12-16 11:03:23.302140] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:31:23.395 [2024-12-16 11:03:23.302146] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:31:23.395 [2024-12-16 11:03:23.302153] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:31:23.395 [2024-12-16 11:03:23.302161] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:31:23.395 [2024-12-16 11:03:23.302172] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:23.395 [2024-12-16 11:03:23.302182] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:31:23.395 [2024-12-16 11:03:23.302191] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:31:23.395 [2024-12-16 11:03:23.302198] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:31:23.395 [2024-12-16 11:03:23.302205] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:31:23.395 [2024-12-16 11:03:23.302212] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:31:23.395 [2024-12-16 11:03:23.302219] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:31:23.395 [2024-12-16 11:03:23.302226] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:31:23.395 [2024-12-16 11:03:23.302232] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:31:23.395 [2024-12-16 11:03:23.302240] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:31:23.395 [2024-12-16 11:03:23.302247] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:31:23.395 [2024-12-16 11:03:23.302254] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:31:23.395 [2024-12-16 11:03:23.302276] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:31:23.395 [2024-12-16 11:03:23.302283] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:31:23.395 [2024-12-16 11:03:23.302290] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:31:23.395 [2024-12-16 11:03:23.302297] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:31:23.395 [2024-12-16 11:03:23.302305] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:23.395 [2024-12-16 11:03:23.302313] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:31:23.395 [2024-12-16 11:03:23.302323] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:31:23.395 [2024-12-16 11:03:23.302331] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:31:23.395 [2024-12-16 11:03:23.302338] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:31:23.395 [2024-12-16 11:03:23.302346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:23.395 [2024-12-16 11:03:23.302355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:31:23.395 [2024-12-16 11:03:23.302363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.671 ms 00:31:23.395 [2024-12-16 11:03:23.302370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:23.395 [2024-12-16 11:03:23.323884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:23.395 [2024-12-16 11:03:23.324223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:23.395 [2024-12-16 11:03:23.324272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.463 ms 00:31:23.395 [2024-12-16 11:03:23.324291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:23.395 [2024-12-16 11:03:23.324487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:23.395 [2024-12-16 11:03:23.324507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:31:23.395 [2024-12-16 11:03:23.324527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.137 ms 00:31:23.395 [2024-12-16 11:03:23.324542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:23.395 [2024-12-16 11:03:23.337588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:23.395 [2024-12-16 11:03:23.337641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:23.395 [2024-12-16 11:03:23.337659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.925 ms 00:31:23.395 [2024-12-16 11:03:23.337667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:23.395 [2024-12-16 11:03:23.337703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:23.395 [2024-12-16 11:03:23.337717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:23.395 
[2024-12-16 11:03:23.337729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:31:23.395 [2024-12-16 11:03:23.337737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:23.395 [2024-12-16 11:03:23.337839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:23.395 [2024-12-16 11:03:23.337849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:23.396 [2024-12-16 11:03:23.337858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:31:23.396 [2024-12-16 11:03:23.337870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:23.396 [2024-12-16 11:03:23.338037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:23.396 [2024-12-16 11:03:23.338047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:23.396 [2024-12-16 11:03:23.338056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.153 ms 00:31:23.396 [2024-12-16 11:03:23.338064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:23.396 [2024-12-16 11:03:23.345119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:23.396 [2024-12-16 11:03:23.345161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:23.396 [2024-12-16 11:03:23.345172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.033 ms 00:31:23.396 [2024-12-16 11:03:23.345192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:23.396 [2024-12-16 11:03:23.345312] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:31:23.396 [2024-12-16 11:03:23.345326] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:31:23.396 [2024-12-16 11:03:23.345335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:23.396 [2024-12-16 11:03:23.345344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:31:23.396 [2024-12-16 11:03:23.345352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:31:23.396 [2024-12-16 11:03:23.345360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:23.396 [2024-12-16 11:03:23.357671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:23.396 [2024-12-16 11:03:23.357716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:31:23.396 [2024-12-16 11:03:23.357726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.294 ms 00:31:23.396 [2024-12-16 11:03:23.357734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:23.396 [2024-12-16 11:03:23.357862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:23.396 [2024-12-16 11:03:23.357872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:31:23.396 [2024-12-16 11:03:23.357880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:31:23.396 [2024-12-16 11:03:23.357888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:23.396 [2024-12-16 11:03:23.357957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:23.396 [2024-12-16 11:03:23.357976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:31:23.396 [2024-12-16 11:03:23.357984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.001 ms 00:31:23.396 [2024-12-16 11:03:23.357995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:23.396 [2024-12-16 11:03:23.358296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:23.396 [2024-12-16 11:03:23.358307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:31:23.396 [2024-12-16 11:03:23.358315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.262 ms 00:31:23.396 [2024-12-16 11:03:23.358329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:23.396 [2024-12-16 11:03:23.358344] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:31:23.396 [2024-12-16 11:03:23.358354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:23.396 [2024-12-16 11:03:23.358361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:31:23.396 [2024-12-16 11:03:23.358369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:31:23.396 [2024-12-16 11:03:23.358383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:23.396 [2024-12-16 11:03:23.367798] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:31:23.396 [2024-12-16 11:03:23.368112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:23.396 [2024-12-16 11:03:23.368129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:31:23.396 [2024-12-16 11:03:23.368141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.711 ms 00:31:23.396 [2024-12-16 11:03:23.368149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:23.396 [2024-12-16 11:03:23.370645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:23.396 [2024-12-16 11:03:23.370683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:31:23.396 [2024-12-16 11:03:23.370699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.465 ms 00:31:23.396 [2024-12-16 11:03:23.370706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:23.396 [2024-12-16 11:03:23.370787] mngt/ftl_mngt_band.c: 414:ftl_mngt_finalize_init_bands: *NOTICE*: [FTL][ftl0] SHM: band open P2L map df_id 0x2400000 00:31:23.396 [2024-12-16 11:03:23.371420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:23.396 [2024-12-16 11:03:23.371437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:31:23.396 [2024-12-16 11:03:23.371447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.654 ms 00:31:23.396 [2024-12-16 11:03:23.371455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:23.396 [2024-12-16 11:03:23.371489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:23.396 [2024-12-16 11:03:23.371505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:31:23.396 [2024-12-16 11:03:23.371514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:31:23.396 [2024-12-16 11:03:23.371521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:23.396 [2024-12-16 11:03:23.371555] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:31:23.396 [2024-12-16 11:03:23.371565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:23.396 [2024-12-16 
11:03:23.371572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:31:23.396 [2024-12-16 11:03:23.371579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:31:23.396 [2024-12-16 11:03:23.371587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:23.396 [2024-12-16 11:03:23.377493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:23.396 [2024-12-16 11:03:23.377549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:31:23.396 [2024-12-16 11:03:23.377560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.888 ms 00:31:23.396 [2024-12-16 11:03:23.377572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:23.396 [2024-12-16 11:03:23.377667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:23.396 [2024-12-16 11:03:23.377676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:31:23.396 [2024-12-16 11:03:23.377685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:31:23.396 [2024-12-16 11:03:23.377692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:23.396 [2024-12-16 11:03:23.379240] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 81.226 ms, result 0 00:31:24.848 
[2024-12-16T11:04:37.237Z] Copying: 1024/1024 [MB] (average 13 MBps) 
[2024-12-16 11:04:37.211517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:37.248 [2024-12-16 11:04:37.211627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:32:37.248 [2024-12-16 11:04:37.211649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:32:37.248 [2024-12-16 11:04:37.211661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:37.248 [2024-12-16 11:04:37.211693] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:32:37.248 [2024-12-16 11:04:37.212709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:37.248 [2024-12-16 11:04:37.212775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:32:37.248 [2024-12-16 11:04:37.212790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.994 ms 00:32:37.248 [2024-12-16 11:04:37.212802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:37.248 [2024-12-16 11:04:37.213400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:37.248 [2024-12-16 11:04:37.213449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Stop core poller 00:32:37.248 [2024-12-16 11:04:37.213471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.560 ms 00:32:37.248 [2024-12-16 11:04:37.213488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:37.248 [2024-12-16 11:04:37.213544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:37.248 [2024-12-16 11:04:37.213559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:32:37.248 [2024-12-16 11:04:37.213571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:32:37.248 [2024-12-16 11:04:37.213582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:37.248 [2024-12-16 11:04:37.213656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:37.248 [2024-12-16 11:04:37.213670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:32:37.248 [2024-12-16 11:04:37.213681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:32:37.248 [2024-12-16 11:04:37.213696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:37.248 [2024-12-16 11:04:37.213716] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:32:37.248 [2024-12-16 11:04:37.213734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:32:37.248 [2024-12-16 11:04:37.213747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:32:37.248 [2024-12-16 11:04:37.213759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:32:37.248 [2024-12-16 11:04:37.213769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:32:37.248 [2024-12-16 11:04:37.213781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:32:37.248 [2024-12-16 11:04:37.213792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:32:37.248 [2024-12-16 11:04:37.213804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:32:37.248 [2024-12-16 11:04:37.213814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:32:37.248 [2024-12-16 11:04:37.213825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:32:37.248 [2024-12-16 11:04:37.213835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:32:37.248 [2024-12-16 11:04:37.213846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:32:37.248 [2024-12-16 11:04:37.213858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:32:37.248 [2024-12-16 11:04:37.213869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:32:37.248 [2024-12-16 11:04:37.213879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:32:37.248 [2024-12-16 11:04:37.213890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:32:37.248 [2024-12-16 11:04:37.213901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 
wr_cnt: 0 state: free 00:32:37.248 [2024-12-16 11:04:37.213912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:32:37.248 [2024-12-16 11:04:37.213923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:32:37.248 [2024-12-16 11:04:37.213957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:32:37.249 [2024-12-16 11:04:37.213968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:32:37.249 [2024-12-16 11:04:37.213980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:32:37.249 [2024-12-16 11:04:37.213992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:32:37.249 [2024-12-16 11:04:37.214005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:32:37.249 [2024-12-16 11:04:37.214017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:32:37.249 [2024-12-16 11:04:37.214735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:32:37.249 [2024-12-16 11:04:37.214804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:32:37.249 [2024-12-16 11:04:37.214849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:32:37.249 [2024-12-16 11:04:37.214890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:32:37.249 [2024-12-16 11:04:37.214955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:32:37.249 [2024-12-16 11:04:37.215074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:32:37.249 [2024-12-16 11:04:37.215123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:32:37.249 [2024-12-16 11:04:37.215165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:32:37.249 [2024-12-16 11:04:37.215206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:32:37.249 [2024-12-16 11:04:37.215247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:32:37.249 [2024-12-16 11:04:37.215288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:32:37.249 [2024-12-16 11:04:37.215388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:32:37.249 [2024-12-16 11:04:37.215432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:32:37.249 [2024-12-16 11:04:37.215474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:32:37.249 [2024-12-16 11:04:37.215516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:32:37.249 [2024-12-16 11:04:37.215544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:32:37.249 [2024-12-16 11:04:37.215555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 41: 0 / 261120 wr_cnt: 0 state: free 00:32:37.249 [2024-12-16 11:04:37.215567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:32:37.249 [2024-12-16 11:04:37.215579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:32:37.249 [2024-12-16 11:04:37.215590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:32:37.249 [2024-12-16 11:04:37.215600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:32:37.249 [2024-12-16 11:04:37.215611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:32:37.249 [2024-12-16 11:04:37.215622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:32:37.249 [2024-12-16 11:04:37.215633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:32:37.249 [2024-12-16 11:04:37.215644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:32:37.249 [2024-12-16 11:04:37.215654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:32:37.249 [2024-12-16 11:04:37.215665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:32:37.249 [2024-12-16 11:04:37.215675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:32:37.249 [2024-12-16 11:04:37.215686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:32:37.249 [2024-12-16 11:04:37.215696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:32:37.249 [2024-12-16 11:04:37.215711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:32:37.249 [2024-12-16 11:04:37.215723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:32:37.249 [2024-12-16 11:04:37.215734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:32:37.249 [2024-12-16 11:04:37.215744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:32:37.249 [2024-12-16 11:04:37.215755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:32:37.249 [2024-12-16 11:04:37.215765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:32:37.249 [2024-12-16 11:04:37.215776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:32:37.249 [2024-12-16 11:04:37.215787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:32:37.249 [2024-12-16 11:04:37.215798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:32:37.249 [2024-12-16 11:04:37.215808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:32:37.249 [2024-12-16 11:04:37.215818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:32:37.249 [2024-12-16 11:04:37.215829] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:32:37.249 [2024-12-16 11:04:37.215839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:32:37.249 [2024-12-16 11:04:37.215850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:32:37.249 [2024-12-16 11:04:37.215861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:32:37.249 [2024-12-16 11:04:37.215874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:32:37.249 [2024-12-16 11:04:37.215884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:32:37.249 [2024-12-16 11:04:37.215895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:32:37.249 [2024-12-16 11:04:37.215905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:32:37.249 [2024-12-16 11:04:37.215916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:32:37.249 [2024-12-16 11:04:37.216054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:32:37.249 [2024-12-16 11:04:37.216105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:32:37.249 [2024-12-16 11:04:37.216148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:32:37.249 [2024-12-16 11:04:37.216190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:32:37.249 [2024-12-16 11:04:37.216232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:32:37.249 [2024-12-16 11:04:37.216274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:32:37.249 [2024-12-16 11:04:37.216315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:32:37.249 [2024-12-16 11:04:37.216356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:32:37.249 [2024-12-16 11:04:37.216397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:32:37.249 [2024-12-16 11:04:37.216438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:32:37.249 [2024-12-16 11:04:37.216478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:32:37.249 [2024-12-16 11:04:37.216519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:32:37.249 [2024-12-16 11:04:37.216561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:32:37.249 [2024-12-16 11:04:37.216574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:32:37.249 [2024-12-16 11:04:37.216585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:32:37.249 [2024-12-16 11:04:37.216596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:32:37.249 [2024-12-16 11:04:37.216607] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:32:37.249 [2024-12-16 11:04:37.216617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:32:37.249 [2024-12-16 11:04:37.216629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:32:37.249 [2024-12-16 11:04:37.216654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:32:37.249 [2024-12-16 11:04:37.216665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:32:37.249 [2024-12-16 11:04:37.216677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:32:37.249 [2024-12-16 11:04:37.216687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:32:37.249 [2024-12-16 11:04:37.216699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:32:37.249 [2024-12-16 11:04:37.216710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:32:37.249 [2024-12-16 11:04:37.216721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:32:37.249 [2024-12-16 11:04:37.216787] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:32:37.249 [2024-12-16 11:04:37.216808] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: a72a5984-caaf-4053-9bd9-3891c43e1ef0 00:32:37.249 [2024-12-16 11:04:37.216821] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:32:37.249 [2024-12-16 11:04:37.216831] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 3104 00:32:37.249 [2024-12-16 11:04:37.216842] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 3072 00:32:37.249 [2024-12-16 11:04:37.216854] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0104 00:32:37.249 [2024-12-16 11:04:37.216866] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:32:37.249 [2024-12-16 11:04:37.216877] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:32:37.249 [2024-12-16 11:04:37.216891] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:32:37.249 [2024-12-16 11:04:37.216900] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:32:37.249 [2024-12-16 11:04:37.216909] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:32:37.249 [2024-12-16 11:04:37.216921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:37.249 [2024-12-16 11:04:37.216959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:32:37.250 [2024-12-16 11:04:37.216979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.206 ms 00:32:37.250 [2024-12-16 11:04:37.216994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:37.250 [2024-12-16 11:04:37.219847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:37.250 [2024-12-16 11:04:37.220046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:32:37.250 [2024-12-16 11:04:37.220066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.819 ms 00:32:37.250 [2024-12-16 11:04:37.220085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:32:37.250 [2024-12-16 11:04:37.220227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:37.250 [2024-12-16 11:04:37.220237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:32:37.250 [2024-12-16 11:04:37.220247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.112 ms 00:32:37.250 [2024-12-16 11:04:37.220255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:37.250 [2024-12-16 11:04:37.227573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:37.250 [2024-12-16 11:04:37.227765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:37.250 [2024-12-16 11:04:37.227792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:37.250 [2024-12-16 11:04:37.227801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:37.250 [2024-12-16 11:04:37.227874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:37.250 [2024-12-16 11:04:37.227883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:37.250 [2024-12-16 11:04:37.227891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:37.250 [2024-12-16 11:04:37.227899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:37.250 [2024-12-16 11:04:37.227990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:37.250 [2024-12-16 11:04:37.228001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:37.250 [2024-12-16 11:04:37.228010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:37.250 [2024-12-16 11:04:37.228021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:37.250 [2024-12-16 11:04:37.228039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:37.250 [2024-12-16 11:04:37.228048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:37.250 [2024-12-16 11:04:37.228057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:37.250 [2024-12-16 11:04:37.228065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:37.510 [2024-12-16 11:04:37.242341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:37.510 [2024-12-16 11:04:37.242526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:37.510 [2024-12-16 11:04:37.242552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:37.510 [2024-12-16 11:04:37.242560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:37.510 [2024-12-16 11:04:37.253550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:37.510 [2024-12-16 11:04:37.253744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:37.510 [2024-12-16 11:04:37.253773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:37.510 [2024-12-16 11:04:37.253782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:37.510 [2024-12-16 11:04:37.253835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:37.510 [2024-12-16 11:04:37.253844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:37.510 [2024-12-16 11:04:37.253857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:37.510 [2024-12-16 
11:04:37.253865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:37.510 [2024-12-16 11:04:37.253907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:37.510 [2024-12-16 11:04:37.253916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:37.510 [2024-12-16 11:04:37.253945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:37.510 [2024-12-16 11:04:37.253954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:37.510 [2024-12-16 11:04:37.254009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:37.510 [2024-12-16 11:04:37.254019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:37.510 [2024-12-16 11:04:37.254027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:37.510 [2024-12-16 11:04:37.254035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:37.510 [2024-12-16 11:04:37.254062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:37.510 [2024-12-16 11:04:37.254071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:32:37.510 [2024-12-16 11:04:37.254079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:37.510 [2024-12-16 11:04:37.254087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:37.510 [2024-12-16 11:04:37.254129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:37.510 [2024-12-16 11:04:37.254139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:37.510 [2024-12-16 11:04:37.254147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:37.510 [2024-12-16 11:04:37.254155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:37.510 [2024-12-16 11:04:37.254202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:37.510 [2024-12-16 11:04:37.254213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:37.510 [2024-12-16 11:04:37.254221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:37.510 [2024-12-16 11:04:37.254229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:37.510 [2024-12-16 11:04:37.254360] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 42.820 ms, result 0 00:32:37.510 00:32:37.510 00:32:37.510 11:04:37 ftl.ftl_restore_fast -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:32:40.053 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:32:40.053 11:04:39 ftl.ftl_restore_fast -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:32:40.053 11:04:39 ftl.ftl_restore_fast -- ftl/restore.sh@85 -- # restore_kill 00:32:40.053 11:04:39 ftl.ftl_restore_fast -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:32:40.053 11:04:39 ftl.ftl_restore_fast -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:32:40.053 11:04:39 ftl.ftl_restore_fast -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:32:40.053 11:04:39 ftl.ftl_restore_fast -- ftl/restore.sh@32 -- # killprocess 92606 00:32:40.053 11:04:39 ftl.ftl_restore_fast -- common/autotest_common.sh@950 -- # '[' -z 92606 ']' 00:32:40.053 11:04:39 ftl.ftl_restore_fast -- common/autotest_common.sh@954 
-- # kill -0 92606 00:32:40.053 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (92606) - No such process 00:32:40.053 Process with pid 92606 is not found 00:32:40.053 Remove shared memory files 00:32:40.053 11:04:39 ftl.ftl_restore_fast -- common/autotest_common.sh@977 -- # echo 'Process with pid 92606 is not found' 00:32:40.053 11:04:39 ftl.ftl_restore_fast -- ftl/restore.sh@33 -- # remove_shm 00:32:40.053 11:04:39 ftl.ftl_restore_fast -- ftl/common.sh@204 -- # echo Remove shared memory files 00:32:40.053 11:04:39 ftl.ftl_restore_fast -- ftl/common.sh@205 -- # rm -f rm -f 00:32:40.053 11:04:39 ftl.ftl_restore_fast -- ftl/common.sh@206 -- # rm -f rm -f /dev/hugepages/ftl_a72a5984-caaf-4053-9bd9-3891c43e1ef0_band_md /dev/hugepages/ftl_a72a5984-caaf-4053-9bd9-3891c43e1ef0_l2p_l1 /dev/hugepages/ftl_a72a5984-caaf-4053-9bd9-3891c43e1ef0_l2p_l2 /dev/hugepages/ftl_a72a5984-caaf-4053-9bd9-3891c43e1ef0_l2p_l2_ctx /dev/hugepages/ftl_a72a5984-caaf-4053-9bd9-3891c43e1ef0_nvc_md /dev/hugepages/ftl_a72a5984-caaf-4053-9bd9-3891c43e1ef0_p2l_pool /dev/hugepages/ftl_a72a5984-caaf-4053-9bd9-3891c43e1ef0_sb /dev/hugepages/ftl_a72a5984-caaf-4053-9bd9-3891c43e1ef0_sb_shm /dev/hugepages/ftl_a72a5984-caaf-4053-9bd9-3891c43e1ef0_trim_bitmap /dev/hugepages/ftl_a72a5984-caaf-4053-9bd9-3891c43e1ef0_trim_log /dev/hugepages/ftl_a72a5984-caaf-4053-9bd9-3891c43e1ef0_trim_md /dev/hugepages/ftl_a72a5984-caaf-4053-9bd9-3891c43e1ef0_vmap 00:32:40.053 11:04:39 ftl.ftl_restore_fast -- ftl/common.sh@207 -- # rm -f rm -f 00:32:40.053 11:04:39 ftl.ftl_restore_fast -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:32:40.053 11:04:39 ftl.ftl_restore_fast -- ftl/common.sh@209 -- # rm -f rm -f 00:32:40.053 ************************************ 00:32:40.053 END TEST ftl_restore_fast 00:32:40.053 ************************************ 00:32:40.053 00:32:40.053 real 4m49.977s 00:32:40.053 user 4m39.159s 00:32:40.053 sys 0m10.340s 00:32:40.053 11:04:39 ftl.ftl_restore_fast -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:40.053 11:04:39 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:32:40.053 11:04:39 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit 00:32:40.053 11:04:39 ftl -- ftl/ftl.sh@14 -- # killprocess 83785 00:32:40.053 11:04:39 ftl -- common/autotest_common.sh@950 -- # '[' -z 83785 ']' 00:32:40.053 11:04:39 ftl -- common/autotest_common.sh@954 -- # kill -0 83785 00:32:40.053 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (83785) - No such process 00:32:40.053 Process with pid 83785 is not found 00:32:40.053 11:04:39 ftl -- common/autotest_common.sh@977 -- # echo 'Process with pid 83785 is not found' 00:32:40.053 11:04:39 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]] 00:32:40.053 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:40.053 11:04:39 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=95588 00:32:40.053 11:04:39 ftl -- ftl/ftl.sh@20 -- # waitforlisten 95588 00:32:40.053 11:04:39 ftl -- common/autotest_common.sh@831 -- # '[' -z 95588 ']' 00:32:40.053 11:04:39 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:32:40.053 11:04:39 ftl -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:40.053 11:04:39 ftl -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:40.053 11:04:39 ftl -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:32:40.053 11:04:39 ftl -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:40.053 11:04:39 ftl -- common/autotest_common.sh@10 -- # set +x 00:32:40.053 [2024-12-16 11:04:39.961756] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:32:40.053 [2024-12-16 11:04:39.961907] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95588 ] 00:32:40.314 [2024-12-16 11:04:40.099118] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:40.314 [2024-12-16 11:04:40.154538] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:32:40.884 11:04:40 ftl -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:40.884 11:04:40 ftl -- common/autotest_common.sh@864 -- # return 0 00:32:40.884 11:04:40 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:32:41.144 nvme0n1 00:32:41.144 11:04:41 ftl -- ftl/ftl.sh@22 -- # clear_lvols 00:32:41.144 11:04:41 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:32:41.145 11:04:41 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:32:41.405 11:04:41 ftl -- ftl/common.sh@28 -- # stores=60ed9392-52b5-4a8b-ab77-d108098e38ba 00:32:41.405 11:04:41 ftl -- ftl/common.sh@29 -- # for lvs in $stores 00:32:41.405 11:04:41 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 60ed9392-52b5-4a8b-ab77-d108098e38ba 00:32:41.669 11:04:41 ftl -- ftl/ftl.sh@23 -- # killprocess 95588 00:32:41.669 11:04:41 ftl -- common/autotest_common.sh@950 -- # '[' -z 95588 ']' 00:32:41.669 11:04:41 ftl -- common/autotest_common.sh@954 -- # kill -0 95588 00:32:41.669 11:04:41 ftl -- common/autotest_common.sh@955 -- # uname 00:32:41.669 11:04:41 ftl -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:41.669 11:04:41 ftl -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 95588 00:32:41.669 killing process with pid 95588 00:32:41.669 11:04:41 ftl -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:32:41.669 11:04:41 ftl -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:32:41.669 11:04:41 ftl -- common/autotest_common.sh@968 -- # echo 'killing process with pid 95588' 00:32:41.669 11:04:41 ftl -- common/autotest_common.sh@969 -- # kill 95588 00:32:41.669 11:04:41 ftl -- common/autotest_common.sh@974 -- # wait 95588 00:32:41.968 11:04:41 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:32:42.249 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:32:42.249 Waiting for block devices as requested 00:32:42.249 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:32:42.511 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:32:42.511 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:32:42.511 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:32:47.804 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:32:47.804 Remove shared memory files 00:32:47.804 11:04:47 ftl -- ftl/ftl.sh@28 -- # remove_shm 00:32:47.804 11:04:47 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files 00:32:47.804 11:04:47 ftl -- ftl/common.sh@205 -- # rm -f rm -f 00:32:47.804 11:04:47 ftl -- 
ftl/common.sh@206 -- # rm -f rm -f 00:32:47.804 11:04:47 ftl -- ftl/common.sh@207 -- # rm -f rm -f 00:32:47.804 11:04:47 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:32:47.804 11:04:47 ftl -- ftl/common.sh@209 -- # rm -f rm -f 00:32:47.804 ************************************ 00:32:47.804 END TEST ftl 00:32:47.804 ************************************ 00:32:47.804 00:32:47.804 real 17m44.486s 00:32:47.804 user 19m51.431s 00:32:47.804 sys 1m20.333s 00:32:47.804 11:04:47 ftl -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:47.804 11:04:47 ftl -- common/autotest_common.sh@10 -- # set +x 00:32:47.804 11:04:47 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:32:47.804 11:04:47 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:32:47.804 11:04:47 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:32:47.804 11:04:47 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:32:47.804 11:04:47 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:32:47.804 11:04:47 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:32:47.804 11:04:47 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:32:47.804 11:04:47 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:32:47.804 11:04:47 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT 00:32:47.804 11:04:47 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:32:47.804 11:04:47 -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:47.804 11:04:47 -- common/autotest_common.sh@10 -- # set +x 00:32:47.804 11:04:47 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:32:47.804 11:04:47 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:32:47.804 11:04:47 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:32:47.804 11:04:47 -- common/autotest_common.sh@10 -- # set +x 00:32:49.191 INFO: APP EXITING 00:32:49.191 INFO: killing all VMs 00:32:49.191 INFO: killing vhost app 00:32:49.191 INFO: EXIT DONE 00:32:49.452 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:32:50.025 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:32:50.025 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:32:50.025 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:32:50.025 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:32:50.286 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:32:50.860 Cleaning 00:32:50.860 Removing: /var/run/dpdk/spdk0/config 00:32:50.860 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:32:50.860 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:32:50.860 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:32:50.860 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:32:50.860 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:32:50.860 Removing: /var/run/dpdk/spdk0/hugepage_info 00:32:50.860 Removing: /var/run/dpdk/spdk0 00:32:50.860 Removing: /var/run/dpdk/spdk_pid69299 00:32:50.860 Removing: /var/run/dpdk/spdk_pid69457 00:32:50.860 Removing: /var/run/dpdk/spdk_pid69653 00:32:50.860 Removing: /var/run/dpdk/spdk_pid69741 00:32:50.860 Removing: /var/run/dpdk/spdk_pid69768 00:32:50.860 Removing: /var/run/dpdk/spdk_pid69875 00:32:50.860 Removing: /var/run/dpdk/spdk_pid69893 00:32:50.860 Removing: /var/run/dpdk/spdk_pid70070 00:32:50.860 Removing: /var/run/dpdk/spdk_pid70144 00:32:50.860 Removing: /var/run/dpdk/spdk_pid70223 00:32:50.860 Removing: /var/run/dpdk/spdk_pid70323 00:32:50.860 Removing: /var/run/dpdk/spdk_pid70399 00:32:50.860 Removing: /var/run/dpdk/spdk_pid70438 
00:32:50.860 Removing: /var/run/dpdk/spdk_pid70474 00:32:50.860 Removing: /var/run/dpdk/spdk_pid70539 00:32:50.860 Removing: /var/run/dpdk/spdk_pid70644 00:32:50.860 Removing: /var/run/dpdk/spdk_pid71065 00:32:50.860 Removing: /var/run/dpdk/spdk_pid71107 00:32:50.860 Removing: /var/run/dpdk/spdk_pid71159 00:32:50.860 Removing: /var/run/dpdk/spdk_pid71175 00:32:50.860 Removing: /var/run/dpdk/spdk_pid71233 00:32:50.860 Removing: /var/run/dpdk/spdk_pid71249 00:32:50.860 Removing: /var/run/dpdk/spdk_pid71306 00:32:50.860 Removing: /var/run/dpdk/spdk_pid71312 00:32:50.860 Removing: /var/run/dpdk/spdk_pid71365 00:32:50.860 Removing: /var/run/dpdk/spdk_pid71383 00:32:50.860 Removing: /var/run/dpdk/spdk_pid71425 00:32:50.860 Removing: /var/run/dpdk/spdk_pid71432 00:32:50.860 Removing: /var/run/dpdk/spdk_pid71570 00:32:50.860 Removing: /var/run/dpdk/spdk_pid71601 00:32:50.860 Removing: /var/run/dpdk/spdk_pid71679 00:32:50.860 Removing: /var/run/dpdk/spdk_pid71841 00:32:50.860 Removing: /var/run/dpdk/spdk_pid71913 00:32:50.860 Removing: /var/run/dpdk/spdk_pid71944 00:32:50.860 Removing: /var/run/dpdk/spdk_pid72377 00:32:50.860 Removing: /var/run/dpdk/spdk_pid72464 00:32:50.860 Removing: /var/run/dpdk/spdk_pid72564 00:32:50.860 Removing: /var/run/dpdk/spdk_pid72602 00:32:50.860 Removing: /var/run/dpdk/spdk_pid72626 00:32:50.860 Removing: /var/run/dpdk/spdk_pid72699 00:32:50.860 Removing: /var/run/dpdk/spdk_pid73315 00:32:50.860 Removing: /var/run/dpdk/spdk_pid73337 00:32:50.860 Removing: /var/run/dpdk/spdk_pid73794 00:32:50.860 Removing: /var/run/dpdk/spdk_pid73887 00:32:50.860 Removing: /var/run/dpdk/spdk_pid73996 00:32:50.860 Removing: /var/run/dpdk/spdk_pid74038 00:32:50.860 Removing: /var/run/dpdk/spdk_pid74058 00:32:50.860 Removing: /var/run/dpdk/spdk_pid74088 00:32:50.860 Removing: /var/run/dpdk/spdk_pid75902 00:32:50.860 Removing: /var/run/dpdk/spdk_pid76021 00:32:50.860 Removing: /var/run/dpdk/spdk_pid76032 00:32:50.860 Removing: /var/run/dpdk/spdk_pid76044 00:32:50.860 Removing: /var/run/dpdk/spdk_pid76084 00:32:50.860 Removing: /var/run/dpdk/spdk_pid76088 00:32:50.860 Removing: /var/run/dpdk/spdk_pid76100 00:32:50.860 Removing: /var/run/dpdk/spdk_pid76139 00:32:50.860 Removing: /var/run/dpdk/spdk_pid76143 00:32:50.860 Removing: /var/run/dpdk/spdk_pid76155 00:32:50.860 Removing: /var/run/dpdk/spdk_pid76194 00:32:50.860 Removing: /var/run/dpdk/spdk_pid76198 00:32:50.860 Removing: /var/run/dpdk/spdk_pid76210 00:32:50.860 Removing: /var/run/dpdk/spdk_pid77568 00:32:50.860 Removing: /var/run/dpdk/spdk_pid77654 00:32:50.860 Removing: /var/run/dpdk/spdk_pid79049 00:32:50.860 Removing: /var/run/dpdk/spdk_pid80420 00:32:50.860 Removing: /var/run/dpdk/spdk_pid80474 00:32:50.860 Removing: /var/run/dpdk/spdk_pid80530 00:32:50.860 Removing: /var/run/dpdk/spdk_pid80579 00:32:50.860 Removing: /var/run/dpdk/spdk_pid80655 00:32:50.860 Removing: /var/run/dpdk/spdk_pid80726 00:32:50.860 Removing: /var/run/dpdk/spdk_pid80862 00:32:50.860 Removing: /var/run/dpdk/spdk_pid81204 00:32:50.860 Removing: /var/run/dpdk/spdk_pid81235 00:32:50.860 Removing: /var/run/dpdk/spdk_pid81676 00:32:50.860 Removing: /var/run/dpdk/spdk_pid81854 00:32:50.860 Removing: /var/run/dpdk/spdk_pid81946 00:32:50.860 Removing: /var/run/dpdk/spdk_pid82045 00:32:50.860 Removing: /var/run/dpdk/spdk_pid82087 00:32:50.860 Removing: /var/run/dpdk/spdk_pid82107 00:32:50.860 Removing: /var/run/dpdk/spdk_pid82403 00:32:50.860 Removing: /var/run/dpdk/spdk_pid82435 00:32:50.860 Removing: /var/run/dpdk/spdk_pid82486 00:32:50.860 Removing: 
/var/run/dpdk/spdk_pid82853 00:32:50.860 Removing: /var/run/dpdk/spdk_pid82997 00:32:50.860 Removing: /var/run/dpdk/spdk_pid83785 00:32:50.860 Removing: /var/run/dpdk/spdk_pid83906 00:32:50.860 Removing: /var/run/dpdk/spdk_pid84054 00:32:50.860 Removing: /var/run/dpdk/spdk_pid84129 00:32:50.860 Removing: /var/run/dpdk/spdk_pid84404 00:32:50.860 Removing: /var/run/dpdk/spdk_pid84657 00:32:50.860 Removing: /var/run/dpdk/spdk_pid84992 00:32:50.860 Removing: /var/run/dpdk/spdk_pid85153 00:32:50.860 Removing: /var/run/dpdk/spdk_pid85283 00:32:50.860 Removing: /var/run/dpdk/spdk_pid85319 00:32:50.860 Removing: /var/run/dpdk/spdk_pid85530 00:32:50.860 Removing: /var/run/dpdk/spdk_pid85550 00:32:50.860 Removing: /var/run/dpdk/spdk_pid85586 00:32:50.860 Removing: /var/run/dpdk/spdk_pid85754 00:32:50.860 Removing: /var/run/dpdk/spdk_pid85967 00:32:50.860 Removing: /var/run/dpdk/spdk_pid86588 00:32:50.860 Removing: /var/run/dpdk/spdk_pid87339 00:32:50.860 Removing: /var/run/dpdk/spdk_pid88039 00:32:50.860 Removing: /var/run/dpdk/spdk_pid88834 00:32:50.860 Removing: /var/run/dpdk/spdk_pid88978 00:32:50.860 Removing: /var/run/dpdk/spdk_pid89054 00:32:50.860 Removing: /var/run/dpdk/spdk_pid89771 00:32:51.122 Removing: /var/run/dpdk/spdk_pid89825 00:32:51.122 Removing: /var/run/dpdk/spdk_pid90477 00:32:51.122 Removing: /var/run/dpdk/spdk_pid90990 00:32:51.122 Removing: /var/run/dpdk/spdk_pid91683 00:32:51.122 Removing: /var/run/dpdk/spdk_pid91807 00:32:51.122 Removing: /var/run/dpdk/spdk_pid91838 00:32:51.122 Removing: /var/run/dpdk/spdk_pid91896 00:32:51.122 Removing: /var/run/dpdk/spdk_pid91942 00:32:51.122 Removing: /var/run/dpdk/spdk_pid91999 00:32:51.122 Removing: /var/run/dpdk/spdk_pid92182 00:32:51.122 Removing: /var/run/dpdk/spdk_pid92251 00:32:51.122 Removing: /var/run/dpdk/spdk_pid92312 00:32:51.122 Removing: /var/run/dpdk/spdk_pid92390 00:32:51.122 Removing: /var/run/dpdk/spdk_pid92420 00:32:51.122 Removing: /var/run/dpdk/spdk_pid92478 00:32:51.122 Removing: /var/run/dpdk/spdk_pid92606 00:32:51.122 Removing: /var/run/dpdk/spdk_pid92812 00:32:51.122 Removing: /var/run/dpdk/spdk_pid93417 00:32:51.122 Removing: /var/run/dpdk/spdk_pid94189 00:32:51.122 Removing: /var/run/dpdk/spdk_pid94774 00:32:51.122 Removing: /var/run/dpdk/spdk_pid95588 00:32:51.122 Clean 00:32:51.122 11:04:50 -- common/autotest_common.sh@1451 -- # return 0 00:32:51.122 11:04:50 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:32:51.122 11:04:50 -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:51.122 11:04:50 -- common/autotest_common.sh@10 -- # set +x 00:32:51.122 11:04:51 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:32:51.122 11:04:51 -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:51.122 11:04:51 -- common/autotest_common.sh@10 -- # set +x 00:32:51.122 11:04:51 -- spdk/autotest.sh@388 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:32:51.122 11:04:51 -- spdk/autotest.sh@390 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:32:51.122 11:04:51 -- spdk/autotest.sh@390 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:32:51.122 11:04:51 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:32:51.122 11:04:51 -- spdk/autotest.sh@394 -- # hostname 00:32:51.122 11:04:51 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t 
00:32:51.122 11:04:51 -- spdk/autotest.sh@394 -- # hostname
00:32:51.122 11:04:51 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info
00:32:51.384 geninfo: WARNING: invalid characters removed from testname!
00:33:17.973 11:05:16 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:33:19.882 11:05:19 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:33:21.783 11:05:21 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:33:23.159 11:05:23 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:33:25.700 11:05:25 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:33:27.076 11:05:26 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:33:29.624 11:05:29 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
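Note: the block above is the coverage post-processing: capture the test-run coverage (-c) into cov_test.info, merge (-a) it with the pre-test baseline, then strip (-r) third-party and uninteresting paths from the total. A condensed sketch of the same flow, with $OUT standing in for /home/vagrant/spdk_repo/spdk/../output and the --rc options omitted for brevity:

    OUT=/path/to/output   # stands in for the output directory used in the log
    # Capture coverage from the instrumented tree, then fold in the baseline.
    lcov -q -c --no-external -d ./spdk -t "$(hostname)" -o "$OUT/cov_test.info"
    lcov -q -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" -o "$OUT/cov_total.info"
    # Drop submodule, system, and example/app paths, mirroring the filters above.
    for pattern in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
        lcov -q -r "$OUT/cov_total.info" "$pattern" -o "$OUT/cov_total.info"
    done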
00:33:29.624 11:05:29 -- common/autotest_common.sh@1680 -- $ [[ y == y ]]
00:33:29.624 11:05:29 -- common/autotest_common.sh@1681 -- $ awk '{print $NF}'
00:33:29.624 11:05:29 -- common/autotest_common.sh@1681 -- $ lcov --version
00:33:29.885 11:05:29 -- common/autotest_common.sh@1681 -- $ lt 1.15 2
00:33:29.885 11:05:29 -- scripts/common.sh@373 -- $ cmp_versions 1.15 '<' 2
00:33:29.885 11:05:29 -- scripts/common.sh@333 -- $ local ver1 ver1_l
00:33:29.885 11:05:29 -- scripts/common.sh@334 -- $ local ver2 ver2_l
00:33:29.885 11:05:29 -- scripts/common.sh@336 -- $ IFS=.-:
00:33:29.885 11:05:29 -- scripts/common.sh@336 -- $ read -ra ver1
00:33:29.885 11:05:29 -- scripts/common.sh@337 -- $ IFS=.-:
00:33:29.885 11:05:29 -- scripts/common.sh@337 -- $ read -ra ver2
00:33:29.885 11:05:29 -- scripts/common.sh@338 -- $ local 'op=<'
00:33:29.885 11:05:29 -- scripts/common.sh@340 -- $ ver1_l=2
00:33:29.885 11:05:29 -- scripts/common.sh@341 -- $ ver2_l=1
00:33:29.885 11:05:29 -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v
00:33:29.885 11:05:29 -- scripts/common.sh@344 -- $ case "$op" in
00:33:29.885 11:05:29 -- scripts/common.sh@345 -- $ : 1
00:33:29.885 11:05:29 -- scripts/common.sh@364 -- $ (( v = 0 ))
00:33:29.885 11:05:29 -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:33:29.885 11:05:29 -- scripts/common.sh@365 -- $ decimal 1
00:33:29.885 11:05:29 -- scripts/common.sh@353 -- $ local d=1
00:33:29.885 11:05:29 -- scripts/common.sh@354 -- $ [[ 1 =~ ^[0-9]+$ ]]
00:33:29.885 11:05:29 -- scripts/common.sh@355 -- $ echo 1
00:33:29.885 11:05:29 -- scripts/common.sh@365 -- $ ver1[v]=1
00:33:29.885 11:05:29 -- scripts/common.sh@366 -- $ decimal 2
00:33:29.885 11:05:29 -- scripts/common.sh@353 -- $ local d=2
00:33:29.885 11:05:29 -- scripts/common.sh@354 -- $ [[ 2 =~ ^[0-9]+$ ]]
00:33:29.885 11:05:29 -- scripts/common.sh@355 -- $ echo 2
00:33:29.885 11:05:29 -- scripts/common.sh@366 -- $ ver2[v]=2
00:33:29.885 11:05:29 -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] ))
00:33:29.885 11:05:29 -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] ))
00:33:29.885 11:05:29 -- scripts/common.sh@368 -- $ return 0
00:33:29.885 11:05:29 -- common/autotest_common.sh@1682 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:33:29.885 11:05:29 -- common/autotest_common.sh@1694 -- $ export 'LCOV_OPTS=
00:33:29.885 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:33:29.885 --rc genhtml_branch_coverage=1
00:33:29.885 --rc genhtml_function_coverage=1
00:33:29.885 --rc genhtml_legend=1
00:33:29.885 --rc geninfo_all_blocks=1
00:33:29.885 --rc geninfo_unexecuted_blocks=1
00:33:29.885 
00:33:29.885 '
00:33:29.885 11:05:29 -- common/autotest_common.sh@1694 -- $ LCOV_OPTS='
00:33:29.885 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:33:29.885 --rc genhtml_branch_coverage=1
00:33:29.885 --rc genhtml_function_coverage=1
00:33:29.885 --rc genhtml_legend=1
00:33:29.885 --rc geninfo_all_blocks=1
00:33:29.885 --rc geninfo_unexecuted_blocks=1
00:33:29.885 
00:33:29.885 '
00:33:29.885 11:05:29 -- common/autotest_common.sh@1695 -- $ export 'LCOV=lcov
00:33:29.885 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:33:29.885 --rc genhtml_branch_coverage=1
00:33:29.885 --rc genhtml_function_coverage=1
00:33:29.885 --rc genhtml_legend=1
00:33:29.885 --rc geninfo_all_blocks=1
00:33:29.885 --rc geninfo_unexecuted_blocks=1
00:33:29.885 
00:33:29.885 '
00:33:29.885 11:05:29 -- common/autotest_common.sh@1695 -- $ LCOV='lcov
00:33:29.885 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:33:29.885 --rc genhtml_branch_coverage=1
00:33:29.885 --rc genhtml_function_coverage=1
00:33:29.885 --rc genhtml_legend=1
00:33:29.885 --rc geninfo_all_blocks=1
00:33:29.885 --rc geninfo_unexecuted_blocks=1
00:33:29.885 
00:33:29.885 '
00:33:29.885 11:05:29 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:33:29.885 11:05:29 -- scripts/common.sh@15 -- $ shopt -s extglob
00:33:29.885 11:05:29 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:33:29.885 11:05:29 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:33:29.885 11:05:29 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
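Note: the scripts/common.sh trace above is `lt 1.15 2` deciding whether the installed lcov (1.15) predates 2.x; because it does, the --rc branch/function coverage options are kept in LCOV_OPTS. The in-tree cmp_versions splits each version string on `.-:` and compares the components numerically; a shorter equivalent using sort -V (a substitute technique for illustration, not the in-tree code) looks like this:

    # version_lt A B: succeeds when version A sorts strictly before version B.
    version_lt() {
        [ "$1" = "$2" ] && return 1
        [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
    }
    # Same decision as the trace: lcov 1.15 < 2, so keep the legacy --rc options.
    version_lt 1.15 2 && lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'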
00:33:29.885 11:05:29 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:33:29.885 11:05:29 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:33:29.885 11:05:29 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:33:29.885 11:05:29 -- paths/export.sh@5 -- $ export PATH
00:33:29.885 11:05:29 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:33:29.885 11:05:29 -- common/autobuild_common.sh@478 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:33:29.885 11:05:29 -- common/autobuild_common.sh@479 -- $ date +%s
00:33:29.885 11:05:29 -- common/autobuild_common.sh@479 -- $ mktemp -dt spdk_1734347129.XXXXXX
00:33:29.885 11:05:29 -- common/autobuild_common.sh@479 -- $ SPDK_WORKSPACE=/tmp/spdk_1734347129.w5ScIG
00:33:29.885 11:05:29 -- common/autobuild_common.sh@481 -- $ [[ -n '' ]]
00:33:29.885 11:05:29 -- common/autobuild_common.sh@485 -- $ '[' -n v22.11.4 ']'
00:33:29.885 11:05:29 -- common/autobuild_common.sh@486 -- $ dirname /home/vagrant/spdk_repo/dpdk/build
00:33:29.885 11:05:29 -- common/autobuild_common.sh@486 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk'
00:33:29.885 11:05:29 -- common/autobuild_common.sh@492 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:33:29.885 11:05:29 -- common/autobuild_common.sh@494 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:33:29.885 11:05:29 -- common/autobuild_common.sh@495 -- $ get_config_params
00:33:29.885 11:05:29 -- common/autotest_common.sh@407 -- $ xtrace_disable
00:33:29.885 11:05:29 -- common/autotest_common.sh@10 -- $ set +x
00:33:29.885 11:05:29 -- common/autobuild_common.sh@495 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-xnvme'
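Note: get_config_params assembles the configure flag string captured in config_params above. The exact invocation site is not shown in this log, but a string like that is presumably handed to SPDK's ./configure more or less verbatim, along these lines:

    # Sketch only: flags copied from config_params above; the invocation site is assumed.
    cd /home/vagrant/spdk_repo/spdk
    ./configure --enable-debug --enable-werror --with-rdma --with-idxd \
        --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
        --enable-ubsan --enable-asan --enable-coverage --with-ublk \
        --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-xnvme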
00:33:29.885 11:05:29 -- common/autobuild_common.sh@497 -- $ start_monitor_resources
00:33:29.885 11:05:29 -- pm/common@17 -- $ local monitor
00:33:29.885 11:05:29 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:33:29.885 11:05:29 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:33:29.885 11:05:29 -- pm/common@25 -- $ sleep 1
00:33:29.885 11:05:29 -- pm/common@21 -- $ date +%s
00:33:29.885 11:05:29 -- pm/common@21 -- $ date +%s
00:33:29.885 11:05:29 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1734347129
00:33:29.885 11:05:29 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1734347129
00:33:29.886 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1734347129_collect-cpu-load.pm.log
00:33:29.886 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1734347129_collect-vmstat.pm.log
00:33:30.829 11:05:30 -- common/autobuild_common.sh@498 -- $ trap stop_monitor_resources EXIT
00:33:30.829 11:05:30 -- spdk/autopackage.sh@10 -- $ [[ 0 -eq 1 ]]
00:33:30.829 11:05:30 -- spdk/autopackage.sh@14 -- $ timing_finish
00:33:30.829 11:05:30 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:33:30.829 11:05:30 -- common/autotest_common.sh@737 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:33:30.829 11:05:30 -- common/autotest_common.sh@740 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:33:30.829 11:05:30 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:33:30.829 11:05:30 -- pm/common@29 -- $ signal_monitor_resources TERM
00:33:30.829 11:05:30 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:33:30.829 11:05:30 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:33:30.829 11:05:30 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]]
00:33:30.829 11:05:30 -- pm/common@44 -- $ pid=97275
00:33:30.829 11:05:30 -- pm/common@50 -- $ kill -TERM 97275
00:33:30.829 11:05:30 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:33:30.829 11:05:30 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]]
00:33:30.829 11:05:30 -- pm/common@44 -- $ pid=97276
00:33:30.829 11:05:30 -- pm/common@50 -- $ kill -TERM 97276
00:33:30.829 + [[ -n 5778 ]]
00:33:30.829 + sudo kill 5778
00:33:30.839 [Pipeline] }
00:33:30.855 [Pipeline] // timeout
00:33:30.861 [Pipeline] }
00:33:30.875 [Pipeline] // stage
00:33:30.881 [Pipeline] }
00:33:30.895 [Pipeline] // catchError
00:33:30.904 [Pipeline] stage
00:33:30.906 [Pipeline] { (Stop VM)
00:33:30.918 [Pipeline] sh
00:33:31.203 + vagrant halt
00:33:33.795 ==> default: Halting domain...
00:33:40.395 [Pipeline] sh
00:33:40.679 + vagrant destroy -f
00:33:43.225 ==> default: Removing domain...
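Note: the resource monitors started before packaging (collect-cpu-load, collect-vmstat) each record their PID in a <name>.pid file under the power/ output directory, and stop_monitor_resources walks those files and sends SIGTERM, as traced above. A minimal sketch of that teardown pattern, with POWER_DIR standing in for the directory from the log:

    POWER_DIR=/path/to/output/power   # stands in for .../output/power in the log
    for name in collect-cpu-load collect-vmstat; do
        pidfile="$POWER_DIR/$name.pid"
        if [ -e "$pidfile" ]; then
            # Signal the monitor recorded in the pid file; ignore already-exited ones.
            kill -TERM "$(cat "$pidfile")" 2>/dev/null || true
        fi
    done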
00:33:44.182 [Pipeline] sh
00:33:44.467 + mv output /var/jenkins/workspace/nvme-vg-autotest/output
00:33:44.477 [Pipeline] }
00:33:44.492 [Pipeline] // stage
00:33:44.497 [Pipeline] }
00:33:44.511 [Pipeline] // dir
00:33:44.516 [Pipeline] }
00:33:44.530 [Pipeline] // wrap
00:33:44.536 [Pipeline] }
00:33:44.548 [Pipeline] // catchError
00:33:44.558 [Pipeline] stage
00:33:44.560 [Pipeline] { (Epilogue)
00:33:44.573 [Pipeline] sh
00:33:44.859 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:33:50.148 [Pipeline] catchError
00:33:50.150 [Pipeline] {
00:33:50.161 [Pipeline] sh
00:33:50.445 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:33:50.446 Artifacts sizes are good
00:33:50.456 [Pipeline] }
00:33:50.470 [Pipeline] // catchError
00:33:50.480 [Pipeline] archiveArtifacts
00:33:50.488 Archiving artifacts
00:33:50.636 [Pipeline] cleanWs
00:33:50.655 [WS-CLEANUP] Deleting project workspace...
00:33:50.655 [WS-CLEANUP] Deferred wipeout is used...
00:33:50.675 [WS-CLEANUP] done
00:33:50.677 [Pipeline] }
00:33:50.691 [Pipeline] // stage
00:33:50.696 [Pipeline] }
00:33:50.710 [Pipeline] // node
00:33:50.715 [Pipeline] End of Pipeline
00:33:50.772 Finished: SUCCESS